Test Report: QEMU_macOS 19749

50b5d8ee62174b462904730e907edeaa222f14db:2024-10-11:36607

Failed tests (99/273)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 27.76
7 TestDownloadOnly/v1.20.0/kubectl 0
21 TestBinaryMirror 0.28
22 TestOffline 10
47 TestCertOptions 10.11
48 TestCertExpiration 195.56
49 TestDockerFlags 10.28
50 TestForceSystemdFlag 10.33
51 TestForceSystemdEnv 10
96 TestFunctional/parallel/ServiceCmdConnect 34.31
161 TestMultiControlPlane/serial/StartCluster 725.37
162 TestMultiControlPlane/serial/DeployApp 107.1
163 TestMultiControlPlane/serial/PingHostFromPods 0.1
164 TestMultiControlPlane/serial/AddWorkerNode 0.08
165 TestMultiControlPlane/serial/NodeLabels 0.06
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.09
168 TestMultiControlPlane/serial/StopSecondaryNode 0.12
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.09
170 TestMultiControlPlane/serial/RestartSecondaryNode 0.15
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.09
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 966.75
183 TestJSONOutput/start/Command 725.26
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.09
195 TestJSONOutput/unpause/Command 0.06
215 TestMountStart/serial/StartWithMountFirst 10.23
218 TestMultiNode/serial/FreshStart2Nodes 9.85
219 TestMultiNode/serial/DeployApp2Nodes 119.22
220 TestMultiNode/serial/PingHostFrom2Pods 0.1
221 TestMultiNode/serial/AddNode 0.08
222 TestMultiNode/serial/MultiNodeLabels 0.07
223 TestMultiNode/serial/ProfileList 0.09
224 TestMultiNode/serial/CopyFile 0.07
225 TestMultiNode/serial/StopNode 0.15
226 TestMultiNode/serial/StartAfterStop 51.38
227 TestMultiNode/serial/RestartKeepsNodes 7.51
228 TestMultiNode/serial/DeleteNode 0.11
229 TestMultiNode/serial/StopMultiNode 2.12
230 TestMultiNode/serial/RestartMultiNode 5.27
231 TestMultiNode/serial/ValidateNameConflict 19.91
235 TestPreload 10.07
237 TestScheduledStopUnix 10.09
238 TestSkaffold 13.3
241 TestRunningBinaryUpgrade 605.62
243 TestKubernetesUpgrade 17.4
256 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.18
257 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 0.89
259 TestStoppedBinaryUpgrade/Upgrade 574.16
261 TestPause/serial/Start 10.14
271 TestNoKubernetes/serial/StartWithK8s 9.92
272 TestNoKubernetes/serial/StartWithStopK8s 5.28
273 TestNoKubernetes/serial/Start 5.33
277 TestNoKubernetes/serial/StartNoArgs 5.34
279 TestNetworkPlugins/group/auto/Start 9.81
280 TestNetworkPlugins/group/calico/Start 9.77
281 TestNetworkPlugins/group/custom-flannel/Start 9.86
282 TestNetworkPlugins/group/false/Start 9.87
283 TestNetworkPlugins/group/kindnet/Start 9.8
284 TestNetworkPlugins/group/flannel/Start 9.77
285 TestNetworkPlugins/group/enable-default-cni/Start 9.89
286 TestNetworkPlugins/group/bridge/Start 9.91
288 TestNetworkPlugins/group/kubenet/Start 10.13
290 TestStartStop/group/old-k8s-version/serial/FirstStart 9.92
291 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
292 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
295 TestStartStop/group/old-k8s-version/serial/SecondStart 5.27
296 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.04
297 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
298 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.08
299 TestStartStop/group/old-k8s-version/serial/Pause 0.11
301 TestStartStop/group/no-preload/serial/FirstStart 10.05
303 TestStartStop/group/embed-certs/serial/FirstStart 12.38
304 TestStartStop/group/no-preload/serial/DeployApp 0.11
305 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.13
307 TestStartStop/group/embed-certs/serial/DeployApp 0.09
308 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
311 TestStartStop/group/no-preload/serial/SecondStart 5.27
313 TestStartStop/group/embed-certs/serial/SecondStart 5.45
314 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.04
315 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
316 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
317 TestStartStop/group/no-preload/serial/Pause 0.11
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.91
320 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
321 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
322 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
323 TestStartStop/group/embed-certs/serial/Pause 0.11
325 TestStartStop/group/newest-cni/serial/FirstStart 10.04
326 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
333 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.26
335 TestStartStop/group/newest-cni/serial/SecondStart 5.27
336 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
337 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
338 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.08
339 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
343 TestStartStop/group/newest-cni/serial/Pause 0.11
TestDownloadOnly/v1.20.0/json-events (27.76s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-503000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-503000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (27.756829917s)

-- stdout --
	{"specversion":"1.0","id":"8fa3b39e-25e4-4a95-976c-aecc3281e9ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-503000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"cc868ae1-bceb-4d91-a716-d0b4aaabad4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19749"}}
	{"specversion":"1.0","id":"1df7dfc6-eb06-4c21-a244-9698cfce74ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig"}}
	{"specversion":"1.0","id":"60a27dba-5eb4-45c8-a90f-d10bfbd57d9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"e0db3702-f4fe-406b-8110-9bdda0693d8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b28cbaab-5912-4420-b4ce-2996486b767f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube"}}
	{"specversion":"1.0","id":"a27c0868-5eb9-48bd-be9b-333acfcbd630","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"6f37e85b-6045-493e-b229-294752f7147c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"60f12a07-ff8c-430c-b9de-07a4cfcf25fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"f512976f-5206-4a28-88fa-66e7dc84bf4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8cb2fea2-38aa-4707-9a06-7bfcc86ccf1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-503000\" primary control-plane node in \"download-only-503000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4b4293eb-03a7-42f6-b9b7-93e089adf188","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"11ae7b6d-31de-4572-8e71-2838fc86ad9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19749-1186/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10a309060 0x10a309060 0x10a309060 0x10a309060 0x10a309060 0x10a309060 0x10a309060] Decompressors:map[bz2:0x14000915990 gz:0x14000915998 tar:0x14000915940 tar.bz2:0x14000915950 tar.gz:0x14000915960 tar.xz:0x14000915970 tar.zst:0x14000915980 tbz2:0x14000915950 tgz:0x14
000915960 txz:0x14000915970 tzst:0x14000915980 xz:0x140009159a0 zip:0x140009159b0 zst:0x140009159a8] Getters:map[file:0x140016f86f0 http:0x140006901e0 https:0x14000690230] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"7b568f3d-de31-4103-8aa8-78c89fdbb21f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I1011 13:57:30.726860    1708 out.go:345] Setting OutFile to fd 1 ...
	I1011 13:57:30.727027    1708 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 13:57:30.727031    1708 out.go:358] Setting ErrFile to fd 2...
	I1011 13:57:30.727033    1708 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 13:57:30.727155    1708 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	W1011 13:57:30.727239    1708 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19749-1186/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19749-1186/.minikube/config/config.json: no such file or directory
	I1011 13:57:30.728673    1708 out.go:352] Setting JSON to true
	I1011 13:57:30.748198    1708 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1620,"bootTime":1728678630,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 13:57:30.748264    1708 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 13:57:30.756623    1708 out.go:97] [download-only-503000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 13:57:30.756800    1708 notify.go:220] Checking for updates...
	W1011 13:57:30.756818    1708 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball: no such file or directory
	I1011 13:57:30.760551    1708 out.go:169] MINIKUBE_LOCATION=19749
	I1011 13:57:30.766632    1708 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 13:57:30.771551    1708 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 13:57:30.775576    1708 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 13:57:30.778627    1708 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	W1011 13:57:30.784573    1708 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1011 13:57:30.784803    1708 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 13:57:30.788633    1708 out.go:97] Using the qemu2 driver based on user configuration
	I1011 13:57:30.788654    1708 start.go:297] selected driver: qemu2
	I1011 13:57:30.788671    1708 start.go:901] validating driver "qemu2" against <nil>
	I1011 13:57:30.788744    1708 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 13:57:30.792527    1708 out.go:169] Automatically selected the socket_vmnet network
	I1011 13:57:30.798552    1708 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1011 13:57:30.798633    1708 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1011 13:57:30.798673    1708 cni.go:84] Creating CNI manager for ""
	I1011 13:57:30.798723    1708 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1011 13:57:30.798784    1708 start.go:340] cluster config:
	{Name:download-only-503000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-503000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 13:57:30.803711    1708 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 13:57:30.807577    1708 out.go:97] Downloading VM boot image ...
	I1011 13:57:30.807592    1708 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso
	I1011 13:57:42.575051    1708 out.go:97] Starting "download-only-503000" primary control-plane node in "download-only-503000" cluster
	I1011 13:57:42.575084    1708 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1011 13:57:42.633317    1708 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1011 13:57:42.633349    1708 cache.go:56] Caching tarball of preloaded images
	I1011 13:57:42.633574    1708 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1011 13:57:42.638627    1708 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1011 13:57:42.638634    1708 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1011 13:57:42.719809    1708 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1011 13:57:57.215485    1708 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1011 13:57:57.215661    1708 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1011 13:57:57.911110    1708 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1011 13:57:57.911305    1708 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/download-only-503000/config.json ...
	I1011 13:57:57.911324    1708 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/download-only-503000/config.json: {Name:mkd2b98657911ccd623de976d2d8a0b17645864c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 13:57:57.911598    1708 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1011 13:57:57.911846    1708 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1011 13:57:58.402547    1708 out.go:193] 
	W1011 13:57:58.407645    1708 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19749-1186/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10a309060 0x10a309060 0x10a309060 0x10a309060 0x10a309060 0x10a309060 0x10a309060] Decompressors:map[bz2:0x14000915990 gz:0x14000915998 tar:0x14000915940 tar.bz2:0x14000915950 tar.gz:0x14000915960 tar.xz:0x14000915970 tar.zst:0x14000915980 tbz2:0x14000915950 tgz:0x14000915960 txz:0x14000915970 tzst:0x14000915980 xz:0x140009159a0 zip:0x140009159b0 zst:0x140009159a8] Getters:map[file:0x140016f86f0 http:0x140006901e0 https:0x14000690230] Dir:false ProgressList
ener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1011 13:57:58.407666    1708 out_reason.go:110] 
	W1011 13:57:58.415481    1708 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 13:57:58.418553    1708 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-503000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (27.76s)
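Note: the root cause above is a 404 on the kubectl checksum file for v1.20.0 on darwin/arm64. A quick manual probe of the same URLs the downloader used (hypothetical commands, not part of the test run) would confirm whether the upstream release artifacts exist at all:

	# follow redirects from dl.k8s.io and show only the response headers
	curl -sIL https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256
	curl -sIL https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl

If both return 404, v1.20.0 does not publish darwin/arm64 client binaries at dl.k8s.io, and the failure is environmental rather than caused by the change under test.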

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19749-1186/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
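Note: this is a direct follow-on from the json-events failure above; kubectl was never downloaded, so the cached binary the test stats does not exist. A hypothetical spot-check (path taken from the log, not part of the test) would show the cache directory is empty or missing:

	ls -l /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/darwin/arm64/v1.20.0/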

TestBinaryMirror (0.28s)

=== RUN   TestBinaryMirror
I1011 13:58:11.604506    1707 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-969000 --alsologtostderr --binary-mirror http://127.0.0.1:49312 --driver=qemu2 
aaa_download_only_test.go:314: (dbg) Non-zero exit: out/minikube-darwin-arm64 start --download-only -p binary-mirror-969000 --alsologtostderr --binary-mirror http://127.0.0.1:49312 --driver=qemu2 : exit status 40 (169.490583ms)

-- stdout --
	* [binary-mirror-969000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "binary-mirror-969000" primary control-plane node in "binary-mirror-969000" cluster
	
	

-- /stdout --
** stderr ** 
	I1011 13:58:11.667815    1773 out.go:345] Setting OutFile to fd 1 ...
	I1011 13:58:11.667979    1773 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 13:58:11.667982    1773 out.go:358] Setting ErrFile to fd 2...
	I1011 13:58:11.667985    1773 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 13:58:11.668102    1773 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 13:58:11.669299    1773 out.go:352] Setting JSON to false
	I1011 13:58:11.686771    1773 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1661,"bootTime":1728678630,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 13:58:11.686849    1773 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 13:58:11.691833    1773 out.go:177] * [binary-mirror-969000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 13:58:11.698984    1773 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 13:58:11.699079    1773 notify.go:220] Checking for updates...
	I1011 13:58:11.705925    1773 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 13:58:11.708959    1773 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 13:58:11.711994    1773 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 13:58:11.715000    1773 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 13:58:11.718187    1773 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 13:58:11.724261    1773 out.go:177] * Using the qemu2 driver based on user configuration
	I1011 13:58:11.730973    1773 start.go:297] selected driver: qemu2
	I1011 13:58:11.730981    1773 start.go:901] validating driver "qemu2" against <nil>
	I1011 13:58:11.731048    1773 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 13:58:11.732464    1773 out.go:177] * Automatically selected the socket_vmnet network
	I1011 13:58:11.737553    1773 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1011 13:58:11.737654    1773 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1011 13:58:11.737677    1773 cni.go:84] Creating CNI manager for ""
	I1011 13:58:11.737700    1773 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1011 13:58:11.737707    1773 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1011 13:58:11.737763    1773 start.go:340] cluster config:
	{Name:binary-mirror-969000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:binary-mirror-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:http://127.0.0.1:49312 DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_
vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 13:58:11.742448    1773 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 13:58:11.749966    1773 out.go:177] * Starting "binary-mirror-969000" primary control-plane node in "binary-mirror-969000" cluster
	I1011 13:58:11.753933    1773 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1011 13:58:11.753949    1773 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1011 13:58:11.753959    1773 cache.go:56] Caching tarball of preloaded images
	I1011 13:58:11.754035    1773 preload.go:172] Found /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1011 13:58:11.754041    1773 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1011 13:58:11.754247    1773 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/binary-mirror-969000/config.json ...
	I1011 13:58:11.754258    1773 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/binary-mirror-969000/config.json: {Name:mk218e24dcbab9aae0a0763bd01e8259da1d4900 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 13:58:11.754579    1773 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1011 13:58:11.754632    1773 download.go:107] Downloading: http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/darwin/arm64/v1.31.1/kubectl
	I1011 13:58:11.781320    1773 out.go:201] 
	W1011 13:58:11.785048    1773 out.go:270] X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19749-1186/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x105c61060 0x105c61060 0x105c61060 0x105c61060 0x105c61060 0x105c61060 0x105c61060] Decompressors:map[bz2:0x1400080a000 gz:0x1400080a008 tar:0x14000627e70 tar.bz2:0x14000627e80 tar.gz:0x14000627e90 tar.xz:0x14000627ea0 tar.zst:0x14000627eb0 tbz2:0x14000627e80 tgz:0x14000627e90 txz:0x14000627ea0 tzst:0x14000627eb0 xz:0x1400080a010 zip:0x1400080a020 zst:0x1400080a018] Getters:map[file:0x1400068bca0 http:0x140007dceb0 https:0x140007dcf00] Dir:
false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: unexpected EOF
	X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19749-1186/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x105c61060 0x105c61060 0x105c61060 0x105c61060 0x105c61060 0x105c61060 0x105c61060] Decompressors:map[bz2:0x1400080a000 gz:0x1400080a008 tar:0x14000627e70 tar.bz2:0x14000627e80 tar.gz:0x14000627e90 tar.xz:0x14000627ea0 tar.zst:0x14000627eb0 tbz2:0x14000627e80 tgz:0x14000627e90 txz:0x14000627ea0 tzst:0x14000627eb0 xz:0x1400080a010 zip:0x1400080a020 zst:0x1400080a018] Getters:map[file:0x1400068bca0 http:0x140007dceb0 https:0x140007dcf00] Dir:false ProgressListener:<nil> Insecure:fals
e DisableSymlinks:false Options:[]}: unexpected EOF
	W1011 13:58:11.785056    1773 out.go:270] * 
	* 
	W1011 13:58:11.785653    1773 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 13:58:11.799897    1773 out.go:201] 

** /stderr **
aaa_download_only_test.go:315: start with --binary-mirror failed ["start" "--download-only" "-p" "binary-mirror-969000" "--alsologtostderr" "--binary-mirror" "http://127.0.0.1:49312" "--driver=qemu2" ""] : exit status 40
helpers_test.go:175: Cleaning up "binary-mirror-969000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-969000
--- FAIL: TestBinaryMirror (0.28s)
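Note: the getter error here is "unexpected EOF" rather than a 404, i.e. the connection to the test's local mirror closed before the response body finished. The URL layout minikube requests from a --binary-mirror is visible in the log: <mirror>/<version>/bin/<os>/<arch>/kubectl plus a matching .sha256. A manual spot-check against a running mirror (hypothetical; the ephemeral server on 127.0.0.1:49312 only exists while the test runs) would look like:

	curl -sI http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256
	curl -s -o /dev/null -w '%{http_code} %{size_download}\n' http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl

A premature close on the second request would point at the mirror server side rather than at the checksum handling.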

TestOffline (10s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-372000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-372000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.878050292s)

-- stdout --
	* [offline-docker-372000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-372000" primary control-plane node in "offline-docker-372000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-372000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1011 14:58:03.763140    4415 out.go:345] Setting OutFile to fd 1 ...
	I1011 14:58:03.763296    4415 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:58:03.763300    4415 out.go:358] Setting ErrFile to fd 2...
	I1011 14:58:03.763310    4415 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:58:03.763475    4415 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 14:58:03.764592    4415 out.go:352] Setting JSON to false
	I1011 14:58:03.784045    4415 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5253,"bootTime":1728678630,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 14:58:03.784122    4415 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 14:58:03.789406    4415 out.go:177] * [offline-docker-372000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 14:58:03.797280    4415 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 14:58:03.797348    4415 notify.go:220] Checking for updates...
	I1011 14:58:03.803214    4415 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 14:58:03.806120    4415 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 14:58:03.809249    4415 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 14:58:03.812123    4415 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 14:58:03.815200    4415 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 14:58:03.818597    4415 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 14:58:03.818667    4415 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 14:58:03.822089    4415 out.go:177] * Using the qemu2 driver based on user configuration
	I1011 14:58:03.829192    4415 start.go:297] selected driver: qemu2
	I1011 14:58:03.829203    4415 start.go:901] validating driver "qemu2" against <nil>
	I1011 14:58:03.829211    4415 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 14:58:03.831385    4415 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 14:58:03.834149    4415 out.go:177] * Automatically selected the socket_vmnet network
	I1011 14:58:03.837280    4415 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 14:58:03.837298    4415 cni.go:84] Creating CNI manager for ""
	I1011 14:58:03.837325    4415 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1011 14:58:03.837336    4415 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1011 14:58:03.837372    4415 start.go:340] cluster config:
	{Name:offline-docker-372000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-372000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bi
n/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 14:58:03.841867    4415 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 14:58:03.850192    4415 out.go:177] * Starting "offline-docker-372000" primary control-plane node in "offline-docker-372000" cluster
	I1011 14:58:03.854215    4415 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1011 14:58:03.854249    4415 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1011 14:58:03.854260    4415 cache.go:56] Caching tarball of preloaded images
	I1011 14:58:03.854395    4415 preload.go:172] Found /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1011 14:58:03.854404    4415 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1011 14:58:03.854480    4415 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/offline-docker-372000/config.json ...
	I1011 14:58:03.854491    4415 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/offline-docker-372000/config.json: {Name:mk411cb3b117af266d04f3c7953610823b23f165 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 14:58:03.854893    4415 start.go:360] acquireMachinesLock for offline-docker-372000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 14:58:03.854946    4415 start.go:364] duration metric: took 45.166µs to acquireMachinesLock for "offline-docker-372000"
	I1011 14:58:03.854960    4415 start.go:93] Provisioning new machine with config: &{Name:offline-docker-372000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-372000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 14:58:03.855003    4415 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 14:58:03.859173    4415 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1011 14:58:03.874462    4415 start.go:159] libmachine.API.Create for "offline-docker-372000" (driver="qemu2")
	I1011 14:58:03.874495    4415 client.go:168] LocalClient.Create starting
	I1011 14:58:03.874579    4415 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 14:58:03.874616    4415 main.go:141] libmachine: Decoding PEM data...
	I1011 14:58:03.874627    4415 main.go:141] libmachine: Parsing certificate...
	I1011 14:58:03.874669    4415 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 14:58:03.874698    4415 main.go:141] libmachine: Decoding PEM data...
	I1011 14:58:03.874705    4415 main.go:141] libmachine: Parsing certificate...
	I1011 14:58:03.875068    4415 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 14:58:04.032197    4415 main.go:141] libmachine: Creating SSH key...
	I1011 14:58:04.168346    4415 main.go:141] libmachine: Creating Disk image...
	I1011 14:58:04.168355    4415 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 14:58:04.168694    4415 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/offline-docker-372000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/offline-docker-372000/disk.qcow2
	I1011 14:58:04.184572    4415 main.go:141] libmachine: STDOUT: 
	I1011 14:58:04.184681    4415 main.go:141] libmachine: STDERR: 
	I1011 14:58:04.184746    4415 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/offline-docker-372000/disk.qcow2 +20000M
	I1011 14:58:04.193949    4415 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 14:58:04.194042    4415 main.go:141] libmachine: STDERR: 
	I1011 14:58:04.194065    4415 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/offline-docker-372000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/offline-docker-372000/disk.qcow2
	I1011 14:58:04.194070    4415 main.go:141] libmachine: Starting QEMU VM...
	I1011 14:58:04.194083    4415 qemu.go:418] Using hvf for hardware acceleration
	I1011 14:58:04.194123    4415 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/offline-docker-372000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/offline-docker-372000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/offline-docker-372000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:b7:69:b8:b5:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/offline-docker-372000/disk.qcow2
	I1011 14:58:04.196341    4415 main.go:141] libmachine: STDOUT: 
	I1011 14:58:04.196465    4415 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 14:58:04.196486    4415 client.go:171] duration metric: took 321.987833ms to LocalClient.Create
	I1011 14:58:06.198522    4415 start.go:128] duration metric: took 2.343542542s to createHost
	I1011 14:58:06.198539    4415 start.go:83] releasing machines lock for "offline-docker-372000", held for 2.343619792s
	W1011 14:58:06.198554    4415 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 14:58:06.204575    4415 out.go:177] * Deleting "offline-docker-372000" in qemu2 ...
	W1011 14:58:06.217944    4415 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 14:58:06.217960    4415 start.go:729] Will try again in 5 seconds ...
	I1011 14:58:11.220155    4415 start.go:360] acquireMachinesLock for offline-docker-372000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 14:58:11.220807    4415 start.go:364] duration metric: took 482.125µs to acquireMachinesLock for "offline-docker-372000"
	I1011 14:58:11.220954    4415 start.go:93] Provisioning new machine with config: &{Name:offline-docker-372000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-372000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 14:58:11.221254    4415 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 14:58:11.228016    4415 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1011 14:58:11.277236    4415 start.go:159] libmachine.API.Create for "offline-docker-372000" (driver="qemu2")
	I1011 14:58:11.277284    4415 client.go:168] LocalClient.Create starting
	I1011 14:58:11.277425    4415 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 14:58:11.277522    4415 main.go:141] libmachine: Decoding PEM data...
	I1011 14:58:11.277540    4415 main.go:141] libmachine: Parsing certificate...
	I1011 14:58:11.277609    4415 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 14:58:11.277665    4415 main.go:141] libmachine: Decoding PEM data...
	I1011 14:58:11.277678    4415 main.go:141] libmachine: Parsing certificate...
	I1011 14:58:11.278235    4415 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 14:58:11.448166    4415 main.go:141] libmachine: Creating SSH key...
	I1011 14:58:11.539165    4415 main.go:141] libmachine: Creating Disk image...
	I1011 14:58:11.539172    4415 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 14:58:11.539407    4415 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/offline-docker-372000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/offline-docker-372000/disk.qcow2
	I1011 14:58:11.549334    4415 main.go:141] libmachine: STDOUT: 
	I1011 14:58:11.549361    4415 main.go:141] libmachine: STDERR: 
	I1011 14:58:11.549417    4415 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/offline-docker-372000/disk.qcow2 +20000M
	I1011 14:58:11.557848    4415 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 14:58:11.557865    4415 main.go:141] libmachine: STDERR: 
	I1011 14:58:11.557876    4415 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/offline-docker-372000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/offline-docker-372000/disk.qcow2
	I1011 14:58:11.557881    4415 main.go:141] libmachine: Starting QEMU VM...
	I1011 14:58:11.557890    4415 qemu.go:418] Using hvf for hardware acceleration
	I1011 14:58:11.557925    4415 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/offline-docker-372000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/offline-docker-372000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/offline-docker-372000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:0d:fe:55:f2:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/offline-docker-372000/disk.qcow2
	I1011 14:58:11.559736    4415 main.go:141] libmachine: STDOUT: 
	I1011 14:58:11.559761    4415 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 14:58:11.559774    4415 client.go:171] duration metric: took 282.48825ms to LocalClient.Create
	I1011 14:58:13.561850    4415 start.go:128] duration metric: took 2.340609542s to createHost
	I1011 14:58:13.561883    4415 start.go:83] releasing machines lock for "offline-docker-372000", held for 2.341079583s
	W1011 14:58:13.562047    4415 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-372000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-372000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 14:58:13.578253    4415 out.go:201] 
	W1011 14:58:13.583191    4415 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 14:58:13.583207    4415 out.go:270] * 
	* 
	W1011 14:58:13.583976    4415 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 14:58:13.597197    4415 out.go:201] 

                                                
                                                
** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-372000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-10-11 14:58:13.60759 -0700 PDT m=+3643.017749959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-372000 -n offline-docker-372000
I1011 14:58:13.620547    1707 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3592822065/001/docker-machine-driver-hyperkit]
I1011 14:58:13.633935    1707 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3592822065/001/docker-machine-driver-hyperkit]
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-372000 -n offline-docker-372000: exit status 7 (37.338583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-372000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-372000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-372000
I1011 14:58:13.657979    1707 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1011 14:58:13.658170    1707 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
--- FAIL: TestOffline (10.00s)
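Every qemu2 start in this run fails at the same point: socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet, so qemu-system-aarch64 exits with status 1 before the VM ever boots. A minimal diagnostic sketch for the build agent, assuming socket_vmnet was installed via Homebrew and runs as a root launchd service (both are assumptions, not confirmed by this log):

	# Check that the socket_vmnet daemon is loaded and that its unix socket exists
	# at the path minikube is configured to use (SocketVMnetPath in the config dump above).
	sudo launchctl list | grep -i socket_vmnet
	ls -l /var/run/socket_vmnet

	# If the daemon is not running, restart it; "socket_vmnet" is the assumed
	# Homebrew service name and may differ for a from-source install.
	sudo brew services restart socket_vmnet

Once the socket accepts connections, the qemu-system-aarch64 command shown in the log above should launch instead of failing with exit status 1.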

                                                
                                    
TestCertOptions (10.11s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-754000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-754000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.831643s)

                                                
                                                
-- stdout --
	* [cert-options-754000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-754000" primary control-plane node in "cert-options-754000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-754000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-754000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-754000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-754000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-754000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (84.686167ms)

                                                
                                                
-- stdout --
	* The control-plane node cert-options-754000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-754000"

                                                
                                                
-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-754000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-754000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-754000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-754000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (45.705833ms)

                                                
                                                
-- stdout --
	* The control-plane node cert-options-754000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-754000"

                                                
                                                
-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-754000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-754000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-754000"

                                                
                                                
-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-10-11 14:58:44.000243 -0700 PDT m=+3673.410996251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-754000 -n cert-options-754000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-754000 -n cert-options-754000: exit status 7 (33.924375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-754000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-754000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-754000
--- FAIL: TestCertOptions (10.11s)
E1011 14:58:45.207264    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/functional-044000/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestCertExpiration (195.56s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-534000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-534000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.155723208s)

                                                
                                                
-- stdout --
	* [cert-expiration-534000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-534000" primary control-plane node in "cert-expiration-534000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-534000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-534000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-534000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-534000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-534000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.242640375s)

                                                
                                                
-- stdout --
	* [cert-expiration-534000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-534000" primary control-plane node in "cert-expiration-534000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-534000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-534000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-534000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-534000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-534000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-534000" primary control-plane node in "cert-expiration-534000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-534000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-534000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-534000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-10-11 15:01:44.210265 -0700 PDT m=+3853.633177709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-534000 -n cert-expiration-534000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-534000 -n cert-expiration-534000: exit status 7 (74.462ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-534000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-534000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-534000
--- FAIL: TestCertExpiration (195.56s)

                                                
                                    
TestDockerFlags (10.28s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-785000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-785000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.03712825s)

                                                
                                                
-- stdout --
	* [docker-flags-785000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-785000" primary control-plane node in "docker-flags-785000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-785000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 14:58:23.760313    4607 out.go:345] Setting OutFile to fd 1 ...
	I1011 14:58:23.760473    4607 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:58:23.760476    4607 out.go:358] Setting ErrFile to fd 2...
	I1011 14:58:23.760479    4607 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:58:23.760615    4607 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 14:58:23.762057    4607 out.go:352] Setting JSON to false
	I1011 14:58:23.779886    4607 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5273,"bootTime":1728678630,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 14:58:23.779949    4607 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 14:58:23.785752    4607 out.go:177] * [docker-flags-785000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 14:58:23.793762    4607 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 14:58:23.793810    4607 notify.go:220] Checking for updates...
	I1011 14:58:23.800767    4607 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 14:58:23.803755    4607 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 14:58:23.806776    4607 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 14:58:23.809743    4607 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 14:58:23.812788    4607 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 14:58:23.816098    4607 config.go:182] Loaded profile config "force-systemd-flag-818000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 14:58:23.816178    4607 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 14:58:23.816220    4607 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 14:58:23.820736    4607 out.go:177] * Using the qemu2 driver based on user configuration
	I1011 14:58:23.827648    4607 start.go:297] selected driver: qemu2
	I1011 14:58:23.827656    4607 start.go:901] validating driver "qemu2" against <nil>
	I1011 14:58:23.827663    4607 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 14:58:23.830191    4607 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 14:58:23.832745    4607 out.go:177] * Automatically selected the socket_vmnet network
	I1011 14:58:23.835972    4607 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1011 14:58:23.835998    4607 cni.go:84] Creating CNI manager for ""
	I1011 14:58:23.836021    4607 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1011 14:58:23.836029    4607 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1011 14:58:23.836059    4607 start.go:340] cluster config:
	{Name:docker-flags-785000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-785000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMn
etClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 14:58:23.840673    4607 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 14:58:23.848697    4607 out.go:177] * Starting "docker-flags-785000" primary control-plane node in "docker-flags-785000" cluster
	I1011 14:58:23.852519    4607 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1011 14:58:23.852535    4607 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1011 14:58:23.852544    4607 cache.go:56] Caching tarball of preloaded images
	I1011 14:58:23.852626    4607 preload.go:172] Found /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1011 14:58:23.852631    4607 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1011 14:58:23.852691    4607 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/docker-flags-785000/config.json ...
	I1011 14:58:23.852702    4607 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/docker-flags-785000/config.json: {Name:mkb82445052ebadf6d5928df76e32088d6495d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 14:58:23.852966    4607 start.go:360] acquireMachinesLock for docker-flags-785000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 14:58:23.853015    4607 start.go:364] duration metric: took 42.875µs to acquireMachinesLock for "docker-flags-785000"
	I1011 14:58:23.853027    4607 start.go:93] Provisioning new machine with config: &{Name:docker-flags-785000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-785000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 14:58:23.853064    4607 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 14:58:23.857702    4607 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1011 14:58:23.874261    4607 start.go:159] libmachine.API.Create for "docker-flags-785000" (driver="qemu2")
	I1011 14:58:23.874304    4607 client.go:168] LocalClient.Create starting
	I1011 14:58:23.874398    4607 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 14:58:23.874443    4607 main.go:141] libmachine: Decoding PEM data...
	I1011 14:58:23.874463    4607 main.go:141] libmachine: Parsing certificate...
	I1011 14:58:23.874502    4607 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 14:58:23.874532    4607 main.go:141] libmachine: Decoding PEM data...
	I1011 14:58:23.874539    4607 main.go:141] libmachine: Parsing certificate...
	I1011 14:58:23.874896    4607 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 14:58:24.026473    4607 main.go:141] libmachine: Creating SSH key...
	I1011 14:58:24.259976    4607 main.go:141] libmachine: Creating Disk image...
	I1011 14:58:24.259988    4607 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 14:58:24.260265    4607 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/docker-flags-785000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/docker-flags-785000/disk.qcow2
	I1011 14:58:24.270846    4607 main.go:141] libmachine: STDOUT: 
	I1011 14:58:24.270875    4607 main.go:141] libmachine: STDERR: 
	I1011 14:58:24.270932    4607 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/docker-flags-785000/disk.qcow2 +20000M
	I1011 14:58:24.279397    4607 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 14:58:24.279410    4607 main.go:141] libmachine: STDERR: 
	I1011 14:58:24.279423    4607 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/docker-flags-785000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/docker-flags-785000/disk.qcow2
	I1011 14:58:24.279428    4607 main.go:141] libmachine: Starting QEMU VM...
	I1011 14:58:24.279441    4607 qemu.go:418] Using hvf for hardware acceleration
	I1011 14:58:24.279466    4607 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/docker-flags-785000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/docker-flags-785000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/docker-flags-785000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:25:06:e0:3c:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/docker-flags-785000/disk.qcow2
	I1011 14:58:24.281236    4607 main.go:141] libmachine: STDOUT: 
	I1011 14:58:24.281249    4607 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 14:58:24.281275    4607 client.go:171] duration metric: took 406.959292ms to LocalClient.Create
	I1011 14:58:26.283511    4607 start.go:128] duration metric: took 2.430461709s to createHost
	I1011 14:58:26.283566    4607 start.go:83] releasing machines lock for "docker-flags-785000", held for 2.430574875s
	W1011 14:58:26.283615    4607 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 14:58:26.307752    4607 out.go:177] * Deleting "docker-flags-785000" in qemu2 ...
	W1011 14:58:26.329606    4607 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 14:58:26.329623    4607 start.go:729] Will try again in 5 seconds ...
	I1011 14:58:31.331814    4607 start.go:360] acquireMachinesLock for docker-flags-785000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 14:58:31.411621    4607 start.go:364] duration metric: took 79.660709ms to acquireMachinesLock for "docker-flags-785000"
	I1011 14:58:31.411762    4607 start.go:93] Provisioning new machine with config: &{Name:docker-flags-785000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-785000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 14:58:31.411961    4607 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 14:58:31.424552    4607 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1011 14:58:31.473430    4607 start.go:159] libmachine.API.Create for "docker-flags-785000" (driver="qemu2")
	I1011 14:58:31.473478    4607 client.go:168] LocalClient.Create starting
	I1011 14:58:31.473675    4607 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 14:58:31.473766    4607 main.go:141] libmachine: Decoding PEM data...
	I1011 14:58:31.473790    4607 main.go:141] libmachine: Parsing certificate...
	I1011 14:58:31.473854    4607 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 14:58:31.473935    4607 main.go:141] libmachine: Decoding PEM data...
	I1011 14:58:31.473951    4607 main.go:141] libmachine: Parsing certificate...
	I1011 14:58:31.474686    4607 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 14:58:31.643448    4607 main.go:141] libmachine: Creating SSH key...
	I1011 14:58:31.694982    4607 main.go:141] libmachine: Creating Disk image...
	I1011 14:58:31.694987    4607 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 14:58:31.695225    4607 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/docker-flags-785000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/docker-flags-785000/disk.qcow2
	I1011 14:58:31.705217    4607 main.go:141] libmachine: STDOUT: 
	I1011 14:58:31.705239    4607 main.go:141] libmachine: STDERR: 
	I1011 14:58:31.705308    4607 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/docker-flags-785000/disk.qcow2 +20000M
	I1011 14:58:31.713720    4607 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 14:58:31.713735    4607 main.go:141] libmachine: STDERR: 
	I1011 14:58:31.713744    4607 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/docker-flags-785000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/docker-flags-785000/disk.qcow2
	I1011 14:58:31.713748    4607 main.go:141] libmachine: Starting QEMU VM...
	I1011 14:58:31.713761    4607 qemu.go:418] Using hvf for hardware acceleration
	I1011 14:58:31.713787    4607 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/docker-flags-785000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/docker-flags-785000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/docker-flags-785000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:8b:92:0b:28:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/docker-flags-785000/disk.qcow2
	I1011 14:58:31.715584    4607 main.go:141] libmachine: STDOUT: 
	I1011 14:58:31.715598    4607 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 14:58:31.715609    4607 client.go:171] duration metric: took 242.128041ms to LocalClient.Create
	I1011 14:58:33.717854    4607 start.go:128] duration metric: took 2.305891208s to createHost
	I1011 14:58:33.717948    4607 start.go:83] releasing machines lock for "docker-flags-785000", held for 2.306312625s
	W1011 14:58:33.718333    4607 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-785000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-785000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 14:58:33.732151    4607 out.go:201] 
	W1011 14:58:33.739046    4607 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 14:58:33.739073    4607 out.go:270] * 
	* 
	W1011 14:58:33.741639    4607 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 14:58:33.751951    4607 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-785000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-785000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-785000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (85.744833ms)

                                                
                                                
-- stdout --
	* The control-plane node docker-flags-785000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-785000"

                                                
                                                
-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-785000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-785000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-785000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-785000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-785000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-785000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-785000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (43.711792ms)

                                                
                                                
-- stdout --
	* The control-plane node docker-flags-785000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-785000"

                                                
                                                
-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-785000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-785000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-785000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-785000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-10-11 14:58:33.897784 -0700 PDT m=+3663.308218792
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-785000 -n docker-flags-785000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-785000 -n docker-flags-785000: exit status 7 (33.369709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-785000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-785000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-785000
--- FAIL: TestDockerFlags (10.28s)

                                                
                                    
TestForceSystemdFlag (10.33s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-818000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-818000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.125416125s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-818000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-818000" primary control-plane node in "force-systemd-flag-818000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-818000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 14:58:18.519102    4582 out.go:345] Setting OutFile to fd 1 ...
	I1011 14:58:18.519275    4582 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:58:18.519279    4582 out.go:358] Setting ErrFile to fd 2...
	I1011 14:58:18.519281    4582 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:58:18.519410    4582 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 14:58:18.520654    4582 out.go:352] Setting JSON to false
	I1011 14:58:18.538036    4582 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5268,"bootTime":1728678630,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 14:58:18.538112    4582 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 14:58:18.543543    4582 out.go:177] * [force-systemd-flag-818000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 14:58:18.555788    4582 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 14:58:18.555806    4582 notify.go:220] Checking for updates...
	I1011 14:58:18.564557    4582 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 14:58:18.568543    4582 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 14:58:18.571651    4582 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 14:58:18.574577    4582 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 14:58:18.577554    4582 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 14:58:18.580958    4582 config.go:182] Loaded profile config "force-systemd-env-075000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 14:58:18.581040    4582 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 14:58:18.581093    4582 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 14:58:18.585559    4582 out.go:177] * Using the qemu2 driver based on user configuration
	I1011 14:58:18.592574    4582 start.go:297] selected driver: qemu2
	I1011 14:58:18.592580    4582 start.go:901] validating driver "qemu2" against <nil>
	I1011 14:58:18.592587    4582 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 14:58:18.595278    4582 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 14:58:18.598485    4582 out.go:177] * Automatically selected the socket_vmnet network
	I1011 14:58:18.601682    4582 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1011 14:58:18.601703    4582 cni.go:84] Creating CNI manager for ""
	I1011 14:58:18.601732    4582 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1011 14:58:18.601739    4582 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1011 14:58:18.601774    4582 start.go:340] cluster config:
	{Name:force-systemd-flag-818000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-818000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Static
IP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 14:58:18.606733    4582 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 14:58:18.613560    4582 out.go:177] * Starting "force-systemd-flag-818000" primary control-plane node in "force-systemd-flag-818000" cluster
	I1011 14:58:18.617590    4582 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1011 14:58:18.617607    4582 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1011 14:58:18.617616    4582 cache.go:56] Caching tarball of preloaded images
	I1011 14:58:18.617703    4582 preload.go:172] Found /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1011 14:58:18.617710    4582 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1011 14:58:18.617765    4582 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/force-systemd-flag-818000/config.json ...
	I1011 14:58:18.617777    4582 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/force-systemd-flag-818000/config.json: {Name:mk559d63c24e7cfcbdceb7a4dd90dcaf3689c7a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 14:58:18.618282    4582 start.go:360] acquireMachinesLock for force-systemd-flag-818000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 14:58:18.618336    4582 start.go:364] duration metric: took 45.084µs to acquireMachinesLock for "force-systemd-flag-818000"
	I1011 14:58:18.618349    4582 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-818000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-818000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 14:58:18.618381    4582 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 14:58:18.622566    4582 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1011 14:58:18.640772    4582 start.go:159] libmachine.API.Create for "force-systemd-flag-818000" (driver="qemu2")
	I1011 14:58:18.640804    4582 client.go:168] LocalClient.Create starting
	I1011 14:58:18.640887    4582 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 14:58:18.640931    4582 main.go:141] libmachine: Decoding PEM data...
	I1011 14:58:18.640942    4582 main.go:141] libmachine: Parsing certificate...
	I1011 14:58:18.640982    4582 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 14:58:18.641016    4582 main.go:141] libmachine: Decoding PEM data...
	I1011 14:58:18.641025    4582 main.go:141] libmachine: Parsing certificate...
	I1011 14:58:18.641430    4582 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 14:58:18.814263    4582 main.go:141] libmachine: Creating SSH key...
	I1011 14:58:19.089231    4582 main.go:141] libmachine: Creating Disk image...
	I1011 14:58:19.089256    4582 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 14:58:19.089540    4582 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/force-systemd-flag-818000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/force-systemd-flag-818000/disk.qcow2
	I1011 14:58:19.100191    4582 main.go:141] libmachine: STDOUT: 
	I1011 14:58:19.100210    4582 main.go:141] libmachine: STDERR: 
	I1011 14:58:19.100272    4582 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/force-systemd-flag-818000/disk.qcow2 +20000M
	I1011 14:58:19.108928    4582 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 14:58:19.108952    4582 main.go:141] libmachine: STDERR: 
	I1011 14:58:19.108966    4582 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/force-systemd-flag-818000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/force-systemd-flag-818000/disk.qcow2
	I1011 14:58:19.108969    4582 main.go:141] libmachine: Starting QEMU VM...
	I1011 14:58:19.108977    4582 qemu.go:418] Using hvf for hardware acceleration
	I1011 14:58:19.109014    4582 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/force-systemd-flag-818000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/force-systemd-flag-818000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/force-systemd-flag-818000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:9f:f8:17:c3:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/force-systemd-flag-818000/disk.qcow2
	I1011 14:58:19.110997    4582 main.go:141] libmachine: STDOUT: 
	I1011 14:58:19.111009    4582 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 14:58:19.111029    4582 client.go:171] duration metric: took 470.225625ms to LocalClient.Create
	I1011 14:58:21.113199    4582 start.go:128] duration metric: took 2.494834083s to createHost
	I1011 14:58:21.113244    4582 start.go:83] releasing machines lock for "force-systemd-flag-818000", held for 2.494931833s
	W1011 14:58:21.113300    4582 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 14:58:21.126612    4582 out.go:177] * Deleting "force-systemd-flag-818000" in qemu2 ...
	W1011 14:58:21.154966    4582 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 14:58:21.154996    4582 start.go:729] Will try again in 5 seconds ...
	I1011 14:58:26.157085    4582 start.go:360] acquireMachinesLock for force-systemd-flag-818000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 14:58:26.283688    4582 start.go:364] duration metric: took 126.507167ms to acquireMachinesLock for "force-systemd-flag-818000"
	I1011 14:58:26.283804    4582 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-818000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-818000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 14:58:26.284025    4582 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 14:58:26.298748    4582 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1011 14:58:26.348420    4582 start.go:159] libmachine.API.Create for "force-systemd-flag-818000" (driver="qemu2")
	I1011 14:58:26.348479    4582 client.go:168] LocalClient.Create starting
	I1011 14:58:26.348622    4582 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 14:58:26.348694    4582 main.go:141] libmachine: Decoding PEM data...
	I1011 14:58:26.348709    4582 main.go:141] libmachine: Parsing certificate...
	I1011 14:58:26.348778    4582 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 14:58:26.348836    4582 main.go:141] libmachine: Decoding PEM data...
	I1011 14:58:26.348847    4582 main.go:141] libmachine: Parsing certificate...
	I1011 14:58:26.349506    4582 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 14:58:26.518421    4582 main.go:141] libmachine: Creating SSH key...
	I1011 14:58:26.542297    4582 main.go:141] libmachine: Creating Disk image...
	I1011 14:58:26.542302    4582 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 14:58:26.542560    4582 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/force-systemd-flag-818000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/force-systemd-flag-818000/disk.qcow2
	I1011 14:58:26.552631    4582 main.go:141] libmachine: STDOUT: 
	I1011 14:58:26.552658    4582 main.go:141] libmachine: STDERR: 
	I1011 14:58:26.552717    4582 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/force-systemd-flag-818000/disk.qcow2 +20000M
	I1011 14:58:26.561190    4582 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 14:58:26.561206    4582 main.go:141] libmachine: STDERR: 
	I1011 14:58:26.561215    4582 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/force-systemd-flag-818000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/force-systemd-flag-818000/disk.qcow2
	I1011 14:58:26.561220    4582 main.go:141] libmachine: Starting QEMU VM...
	I1011 14:58:26.561228    4582 qemu.go:418] Using hvf for hardware acceleration
	I1011 14:58:26.561264    4582 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/force-systemd-flag-818000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/force-systemd-flag-818000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/force-systemd-flag-818000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:41:d9:b5:94:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/force-systemd-flag-818000/disk.qcow2
	I1011 14:58:26.563085    4582 main.go:141] libmachine: STDOUT: 
	I1011 14:58:26.563099    4582 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 14:58:26.563111    4582 client.go:171] duration metric: took 214.629958ms to LocalClient.Create
	I1011 14:58:28.565328    4582 start.go:128] duration metric: took 2.281264417s to createHost
	I1011 14:58:28.565398    4582 start.go:83] releasing machines lock for "force-systemd-flag-818000", held for 2.281683542s
	W1011 14:58:28.565753    4582 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-818000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-818000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 14:58:28.579740    4582 out.go:201] 
	W1011 14:58:28.586609    4582 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 14:58:28.586634    4582 out.go:270] * 
	* 
	W1011 14:58:28.589385    4582 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 14:58:28.598604    4582 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-818000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-818000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-818000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (84.11675ms)

                                                
                                                
-- stdout --
	* The control-plane node force-systemd-flag-818000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-818000"

                                                
                                                
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-818000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-10-11 14:58:28.700676 -0700 PDT m=+3658.111040876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-818000 -n force-systemd-flag-818000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-818000 -n force-systemd-flag-818000: exit status 7 (36.331041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-818000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-818000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-818000
--- FAIL: TestForceSystemdFlag (10.33s)
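Both VM creation attempts above fail with Failed to connect to "/var/run/socket_vmnet": Connection refused, so the force-systemd flag is never exercised; the failure is in the host networking helper, not in the flag handling. A minimal pre-flight check on the CI host, assuming the socket and client paths shown in the log, would be:

$ ls -l /var/run/socket_vmnet
$ pgrep -fl socket_vmnet
$ /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

The last line is a sketch that assumes socket_vmnet_client takes the socket path followed by a command to run (the same invocation shape seen in the log); it should reproduce the same "Connection refused" error whenever the daemon is not listening on /var/run/socket_vmnet.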

                                                
                                    
TestForceSystemdEnv (10s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-075000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
I1011 14:58:15.415000    1707 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W1011 14:58:15.415020    1707 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W1011 14:58:15.415084    1707 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1011 14:58:15.415115    1707 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3592822065/002/docker-machine-driver-hyperkit
I1011 14:58:15.806740    1707 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3592822065/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x1055de400 0x1055de400 0x1055de400 0x1055de400 0x1055de400 0x1055de400 0x1055de400] Decompressors:map[bz2:0x1400081ae20 gz:0x1400081ae28 tar:0x1400081add0 tar.bz2:0x1400081ade0 tar.gz:0x1400081adf0 tar.xz:0x1400081ae00 tar.zst:0x1400081ae10 tbz2:0x1400081ade0 tgz:0x1400081adf0 txz:0x1400081ae00 tzst:0x1400081ae10 xz:0x1400081ae30 zip:0x1400081ae40 zst:0x1400081ae38] Getters:map[file:0x14000888e10 http:0x1400073d860 https:0x1400073d8b0] Dir:false ProgressListener:<nil> Insecure:false DisableSy
mlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1011 14:58:15.806838    1707 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3592822065/002/docker-machine-driver-hyperkit
I1011 14:58:18.432707    1707 install.go:79] stdout: 
W1011 14:58:18.432919    1707 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3592822065/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3592822065/002/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
I1011 14:58:18.432948    1707 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3592822065/002/docker-machine-driver-hyperkit]
I1011 14:58:18.451062    1707 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3592822065/002/docker-machine-driver-hyperkit]
I1011 14:58:18.465684    1707 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3592822065/002/docker-machine-driver-hyperkit]
I1011 14:58:18.476617    1707 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3592822065/002/docker-machine-driver-hyperkit]
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-075000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.804482291s)

                                                
                                                
-- stdout --
	* [force-systemd-env-075000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-075000" primary control-plane node in "force-systemd-env-075000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-075000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 14:58:13.757274    4562 out.go:345] Setting OutFile to fd 1 ...
	I1011 14:58:13.757433    4562 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:58:13.757439    4562 out.go:358] Setting ErrFile to fd 2...
	I1011 14:58:13.757442    4562 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:58:13.757560    4562 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 14:58:13.758719    4562 out.go:352] Setting JSON to false
	I1011 14:58:13.776865    4562 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5263,"bootTime":1728678630,"procs":503,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 14:58:13.776935    4562 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 14:58:13.783213    4562 out.go:177] * [force-systemd-env-075000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 14:58:13.791191    4562 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 14:58:13.791229    4562 notify.go:220] Checking for updates...
	I1011 14:58:13.798188    4562 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 14:58:13.801112    4562 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 14:58:13.804147    4562 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 14:58:13.807178    4562 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 14:58:13.810069    4562 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1011 14:58:13.813520    4562 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 14:58:13.813569    4562 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 14:58:13.818107    4562 out.go:177] * Using the qemu2 driver based on user configuration
	I1011 14:58:13.825185    4562 start.go:297] selected driver: qemu2
	I1011 14:58:13.825191    4562 start.go:901] validating driver "qemu2" against <nil>
	I1011 14:58:13.825197    4562 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 14:58:13.827650    4562 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 14:58:13.830110    4562 out.go:177] * Automatically selected the socket_vmnet network
	I1011 14:58:13.831504    4562 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1011 14:58:13.831516    4562 cni.go:84] Creating CNI manager for ""
	I1011 14:58:13.831536    4562 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1011 14:58:13.831541    4562 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1011 14:58:13.831570    4562 start.go:340] cluster config:
	{Name:force-systemd-env-075000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-075000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 14:58:13.835734    4562 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 14:58:13.844194    4562 out.go:177] * Starting "force-systemd-env-075000" primary control-plane node in "force-systemd-env-075000" cluster
	I1011 14:58:13.848129    4562 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1011 14:58:13.848142    4562 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1011 14:58:13.848149    4562 cache.go:56] Caching tarball of preloaded images
	I1011 14:58:13.848216    4562 preload.go:172] Found /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1011 14:58:13.848221    4562 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1011 14:58:13.848262    4562 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/force-systemd-env-075000/config.json ...
	I1011 14:58:13.848272    4562 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/force-systemd-env-075000/config.json: {Name:mkb098e1ac493152c72db8569ca99d240c93105b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 14:58:13.848533    4562 start.go:360] acquireMachinesLock for force-systemd-env-075000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 14:58:13.848575    4562 start.go:364] duration metric: took 36.584µs to acquireMachinesLock for "force-systemd-env-075000"
	I1011 14:58:13.848587    4562 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-075000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-075000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 14:58:13.848610    4562 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 14:58:13.856139    4562 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1011 14:58:13.870802    4562 start.go:159] libmachine.API.Create for "force-systemd-env-075000" (driver="qemu2")
	I1011 14:58:13.870831    4562 client.go:168] LocalClient.Create starting
	I1011 14:58:13.870898    4562 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 14:58:13.870933    4562 main.go:141] libmachine: Decoding PEM data...
	I1011 14:58:13.870942    4562 main.go:141] libmachine: Parsing certificate...
	I1011 14:58:13.870986    4562 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 14:58:13.871013    4562 main.go:141] libmachine: Decoding PEM data...
	I1011 14:58:13.871021    4562 main.go:141] libmachine: Parsing certificate...
	I1011 14:58:13.871376    4562 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 14:58:14.022634    4562 main.go:141] libmachine: Creating SSH key...
	I1011 14:58:14.126185    4562 main.go:141] libmachine: Creating Disk image...
	I1011 14:58:14.126194    4562 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 14:58:14.126434    4562 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/force-systemd-env-075000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/force-systemd-env-075000/disk.qcow2
	I1011 14:58:14.136635    4562 main.go:141] libmachine: STDOUT: 
	I1011 14:58:14.136652    4562 main.go:141] libmachine: STDERR: 
	I1011 14:58:14.136716    4562 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/force-systemd-env-075000/disk.qcow2 +20000M
	I1011 14:58:14.145590    4562 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 14:58:14.145612    4562 main.go:141] libmachine: STDERR: 
	I1011 14:58:14.145626    4562 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/force-systemd-env-075000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/force-systemd-env-075000/disk.qcow2
	I1011 14:58:14.145631    4562 main.go:141] libmachine: Starting QEMU VM...
	I1011 14:58:14.145641    4562 qemu.go:418] Using hvf for hardware acceleration
	I1011 14:58:14.145674    4562 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/force-systemd-env-075000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/force-systemd-env-075000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/force-systemd-env-075000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:c5:df:9b:b4:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/force-systemd-env-075000/disk.qcow2
	I1011 14:58:14.147573    4562 main.go:141] libmachine: STDOUT: 
	I1011 14:58:14.147589    4562 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 14:58:14.147609    4562 client.go:171] duration metric: took 276.777042ms to LocalClient.Create
	I1011 14:58:16.149781    4562 start.go:128] duration metric: took 2.301177459s to createHost
	I1011 14:58:16.149837    4562 start.go:83] releasing machines lock for "force-systemd-env-075000", held for 2.301284333s
	W1011 14:58:16.149894    4562 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 14:58:16.164283    4562 out.go:177] * Deleting "force-systemd-env-075000" in qemu2 ...
	W1011 14:58:16.189691    4562 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 14:58:16.189729    4562 start.go:729] Will try again in 5 seconds ...
	I1011 14:58:21.191868    4562 start.go:360] acquireMachinesLock for force-systemd-env-075000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 14:58:21.192279    4562 start.go:364] duration metric: took 336.458µs to acquireMachinesLock for "force-systemd-env-075000"
	I1011 14:58:21.192403    4562 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-075000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-075000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 14:58:21.192733    4562 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 14:58:21.201378    4562 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1011 14:58:21.249015    4562 start.go:159] libmachine.API.Create for "force-systemd-env-075000" (driver="qemu2")
	I1011 14:58:21.249053    4562 client.go:168] LocalClient.Create starting
	I1011 14:58:21.249168    4562 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 14:58:21.249249    4562 main.go:141] libmachine: Decoding PEM data...
	I1011 14:58:21.249267    4562 main.go:141] libmachine: Parsing certificate...
	I1011 14:58:21.249336    4562 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 14:58:21.249396    4562 main.go:141] libmachine: Decoding PEM data...
	I1011 14:58:21.249413    4562 main.go:141] libmachine: Parsing certificate...
	I1011 14:58:21.249968    4562 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 14:58:21.423262    4562 main.go:141] libmachine: Creating SSH key...
	I1011 14:58:21.460262    4562 main.go:141] libmachine: Creating Disk image...
	I1011 14:58:21.460269    4562 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 14:58:21.460479    4562 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/force-systemd-env-075000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/force-systemd-env-075000/disk.qcow2
	I1011 14:58:21.470362    4562 main.go:141] libmachine: STDOUT: 
	I1011 14:58:21.470385    4562 main.go:141] libmachine: STDERR: 
	I1011 14:58:21.470438    4562 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/force-systemd-env-075000/disk.qcow2 +20000M
	I1011 14:58:21.479025    4562 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 14:58:21.479043    4562 main.go:141] libmachine: STDERR: 
	I1011 14:58:21.479071    4562 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/force-systemd-env-075000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/force-systemd-env-075000/disk.qcow2
	I1011 14:58:21.479076    4562 main.go:141] libmachine: Starting QEMU VM...
	I1011 14:58:21.479090    4562 qemu.go:418] Using hvf for hardware acceleration
	I1011 14:58:21.479123    4562 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/force-systemd-env-075000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/force-systemd-env-075000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/force-systemd-env-075000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:dd:72:37:b9:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/force-systemd-env-075000/disk.qcow2
	I1011 14:58:21.480912    4562 main.go:141] libmachine: STDOUT: 
	I1011 14:58:21.480927    4562 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 14:58:21.480939    4562 client.go:171] duration metric: took 231.885292ms to LocalClient.Create
	I1011 14:58:23.483158    4562 start.go:128] duration metric: took 2.290405084s to createHost
	I1011 14:58:23.483246    4562 start.go:83] releasing machines lock for "force-systemd-env-075000", held for 2.290970209s
	W1011 14:58:23.483723    4562 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-075000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-075000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 14:58:23.496429    4562 out.go:201] 
	W1011 14:58:23.499483    4562 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 14:58:23.499510    4562 out.go:270] * 
	* 
	W1011 14:58:23.502250    4562 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 14:58:23.513418    4562 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-075000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
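The stderr above shows the qemu2 driver failing because nothing is listening on /var/run/socket_vmnet. As a hedged illustration (not part of the test suite, and assuming the same socket path), a Go check like the following would reproduce the same "connection refused" by dialing the unix socket the driver hands to socket_vmnet_client:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the unix socket that the qemu2 driver expects socket_vmnet to serve.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// With socket_vmnet not running, this prints a "connection refused" error
			// matching the failure captured in the log above.
			fmt.Println("socket_vmnet is not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet socket accepted a connection")
	}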
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-075000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-075000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (82.089125ms)

                                                
                                                
-- stdout --
	* The control-plane node force-systemd-env-075000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-075000"

                                                
                                                
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-075000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
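For context, the check that fails here shells out to the minikube binary and reads Docker's cgroup driver over ssh. A minimal sketch of that pattern, assuming the same binary path and profile name as in the log (an illustration, not the test's actual helper):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// dockerCgroupDriver runs `minikube -p <profile> ssh "docker info --format {{.CgroupDriver}}"`
	// and returns the trimmed output (e.g. "systemd" or "cgroupfs").
	func dockerCgroupDriver(minikubeBin, profile string) (string, error) {
		out, err := exec.Command(minikubeBin, "-p", profile, "ssh",
			"docker info --format {{.CgroupDriver}}").CombinedOutput()
		if err != nil {
			return "", fmt.Errorf("ssh failed: %w: %s", err, out)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		driver, err := dockerCgroupDriver("out/minikube-darwin-arm64", "force-systemd-env-075000")
		if err != nil {
			fmt.Println(err) // with the host stopped, this fails just as the test did
			return
		}
		fmt.Println("cgroup driver:", driver)
	}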
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-10-11 14:58:23.612826 -0700 PDT m=+3653.023121376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-075000 -n force-systemd-env-075000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-075000 -n force-systemd-env-075000: exit status 7 (36.654333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-075000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-075000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-075000
--- FAIL: TestForceSystemdEnv (10.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (34.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-044000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-044000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-52lcm" [2b44d371-9596-4991-8073-7283a69c69bb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-52lcm" [2b44d371-9596-4991-8073-7283a69c69bb] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003943s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:31256
functional_test.go:1661: error fetching http://192.168.105.4:31256: Get "http://192.168.105.4:31256": dial tcp 192.168.105.4:31256: connect: connection refused
I1011 14:09:09.989828    1707 retry.go:31] will retry after 840.375439ms: Get "http://192.168.105.4:31256": dial tcp 192.168.105.4:31256: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31256: Get "http://192.168.105.4:31256": dial tcp 192.168.105.4:31256: connect: connection refused
I1011 14:09:10.833186    1707 retry.go:31] will retry after 2.091708906s: Get "http://192.168.105.4:31256": dial tcp 192.168.105.4:31256: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31256: Get "http://192.168.105.4:31256": dial tcp 192.168.105.4:31256: connect: connection refused
I1011 14:09:12.927963    1707 retry.go:31] will retry after 2.500745845s: Get "http://192.168.105.4:31256": dial tcp 192.168.105.4:31256: connect: connection refused
E1011 14:09:13.243529    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/addons-392000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1661: error fetching http://192.168.105.4:31256: Get "http://192.168.105.4:31256": dial tcp 192.168.105.4:31256: connect: connection refused
I1011 14:09:15.432378    1707 retry.go:31] will retry after 3.995489417s: Get "http://192.168.105.4:31256": dial tcp 192.168.105.4:31256: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31256: Get "http://192.168.105.4:31256": dial tcp 192.168.105.4:31256: connect: connection refused
I1011 14:09:19.431548    1707 retry.go:31] will retry after 6.986079959s: Get "http://192.168.105.4:31256": dial tcp 192.168.105.4:31256: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31256: Get "http://192.168.105.4:31256": dial tcp 192.168.105.4:31256: connect: connection refused
I1011 14:09:26.420257    1707 retry.go:31] will retry after 9.476374983s: Get "http://192.168.105.4:31256": dial tcp 192.168.105.4:31256: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31256: Get "http://192.168.105.4:31256": dial tcp 192.168.105.4:31256: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:31256: Get "http://192.168.105.4:31256": dial tcp 192.168.105.4:31256: connect: connection refused
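The retries above come from polling the NodePort URL with an increasing delay until it answers or the attempt budget runs out. A minimal sketch of that retry-with-backoff pattern, using the endpoint from the log (illustrative only, not the test's retry helper):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// fetchWithBackoff polls url, doubling the delay between attempts, and returns
	// nil as soon as a request succeeds.
	func fetchWithBackoff(url string, attempts int) error {
		delay := time.Second
		for i := 0; i < attempts; i++ {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				return nil
			}
			fmt.Printf("attempt %d: %v; retrying in %s\n", i+1, err, delay)
			time.Sleep(delay)
			delay *= 2
		}
		return fmt.Errorf("endpoint %s never became reachable", url)
	}

	func main() {
		if err := fetchWithBackoff("http://192.168.105.4:31256", 6); err != nil {
			fmt.Println(err) // with nothing serving the NodePort, every attempt is refused
		}
	}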
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-044000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-52lcm
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-044000/192.168.105.4
Start Time:       Fri, 11 Oct 2024 14:09:02 -0700
Labels:           app=hello-node-connect
pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
echoserver-arm:
Container ID:   docker://a5b0cc56cb326041c36b79d90bf544d445e6d968b56db3a102c65b8415f088c9
Image:          registry.k8s.io/echoserver-arm:1.8
Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       CrashLoopBackOff
Last State:     Terminated
Reason:       Error
Exit Code:    1
Started:      Fri, 11 Oct 2024 14:09:16 -0700
Finished:     Fri, 11 Oct 2024 14:09:16 -0700
Ready:          False
Restart Count:  2
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kl5dr (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-kl5dr:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  33s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-52lcm to functional-044000
Normal   Pulled     19s (x3 over 32s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Normal   Created    19s (x3 over 32s)  kubelet            Created container echoserver-arm
Normal   Started    19s (x3 over 32s)  kubelet            Started container echoserver-arm
Warning  BackOff    3s (x3 over 31s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-52lcm_default(2b44d371-9596-4991-8073-7283a69c69bb)

                                                
                                                
functional_test.go:1608: (dbg) Run:  kubectl --context functional-044000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
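The "exec format error" above usually means the binary inside the image was built for a different CPU architecture than the arm64 node. As a hypothetical diagnostic (not part of the test), the architecture recorded in the image manifest can be read with "docker image inspect":

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// imageArch asks the local Docker daemon for the architecture recorded in an image.
	func imageArch(image string) (string, error) {
		out, err := exec.Command("docker", "image", "inspect",
			"--format", "{{.Architecture}}", image).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		arch, err := imageArch("registry.k8s.io/echoserver-arm:1.8")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// An "amd64" result on an arm64 node would explain the exec format error above.
		fmt.Println("image architecture:", arch)
	}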
functional_test.go:1614: (dbg) Run:  kubectl --context functional-044000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.101.126.192
IPs:                      10.101.126.192
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31256/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-044000 -n functional-044000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                        Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| mount     | -p functional-044000                                                                                                | functional-044000 | jenkins | v1.34.0 | 11 Oct 24 14:09 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port206435514/001:/mount-9p      |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-044000 ssh findmnt                                                                                       | functional-044000 | jenkins | v1.34.0 | 11 Oct 24 14:09 PDT | 11 Oct 24 14:09 PDT |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-044000 ssh -- ls                                                                                         | functional-044000 | jenkins | v1.34.0 | 11 Oct 24 14:09 PDT | 11 Oct 24 14:09 PDT |
	|           | -la /mount-9p                                                                                                       |                   |         |         |                     |                     |
	| ssh       | functional-044000 ssh cat                                                                                           | functional-044000 | jenkins | v1.34.0 | 11 Oct 24 14:09 PDT | 11 Oct 24 14:09 PDT |
	|           | /mount-9p/test-1728680964928578000                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-044000 ssh stat                                                                                          | functional-044000 | jenkins | v1.34.0 | 11 Oct 24 14:09 PDT | 11 Oct 24 14:09 PDT |
	|           | /mount-9p/created-by-test                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-044000 ssh stat                                                                                          | functional-044000 | jenkins | v1.34.0 | 11 Oct 24 14:09 PDT | 11 Oct 24 14:09 PDT |
	|           | /mount-9p/created-by-pod                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-044000 ssh sudo                                                                                          | functional-044000 | jenkins | v1.34.0 | 11 Oct 24 14:09 PDT | 11 Oct 24 14:09 PDT |
	|           | umount -f /mount-9p                                                                                                 |                   |         |         |                     |                     |
	| mount     | -p functional-044000                                                                                                | functional-044000 | jenkins | v1.34.0 | 11 Oct 24 14:09 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port295655619/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                 |                   |         |         |                     |                     |
	| ssh       | functional-044000 ssh findmnt                                                                                       | functional-044000 | jenkins | v1.34.0 | 11 Oct 24 14:09 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-044000 ssh findmnt                                                                                       | functional-044000 | jenkins | v1.34.0 | 11 Oct 24 14:09 PDT | 11 Oct 24 14:09 PDT |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-044000 ssh -- ls                                                                                         | functional-044000 | jenkins | v1.34.0 | 11 Oct 24 14:09 PDT | 11 Oct 24 14:09 PDT |
	|           | -la /mount-9p                                                                                                       |                   |         |         |                     |                     |
	| ssh       | functional-044000 ssh sudo                                                                                          | functional-044000 | jenkins | v1.34.0 | 11 Oct 24 14:09 PDT |                     |
	|           | umount -f /mount-9p                                                                                                 |                   |         |         |                     |                     |
	| mount     | -p functional-044000                                                                                                | functional-044000 | jenkins | v1.34.0 | 11 Oct 24 14:09 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup592046441/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-044000                                                                                                | functional-044000 | jenkins | v1.34.0 | 11 Oct 24 14:09 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup592046441/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-044000                                                                                                | functional-044000 | jenkins | v1.34.0 | 11 Oct 24 14:09 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup592046441/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-044000 ssh findmnt                                                                                       | functional-044000 | jenkins | v1.34.0 | 11 Oct 24 14:09 PDT |                     |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-044000 ssh findmnt                                                                                       | functional-044000 | jenkins | v1.34.0 | 11 Oct 24 14:09 PDT |                     |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-044000 ssh findmnt                                                                                       | functional-044000 | jenkins | v1.34.0 | 11 Oct 24 14:09 PDT | 11 Oct 24 14:09 PDT |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-044000 ssh findmnt                                                                                       | functional-044000 | jenkins | v1.34.0 | 11 Oct 24 14:09 PDT | 11 Oct 24 14:09 PDT |
	|           | -T /mount2                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-044000 ssh findmnt                                                                                       | functional-044000 | jenkins | v1.34.0 | 11 Oct 24 14:09 PDT | 11 Oct 24 14:09 PDT |
	|           | -T /mount3                                                                                                          |                   |         |         |                     |                     |
	| mount     | -p functional-044000                                                                                                | functional-044000 | jenkins | v1.34.0 | 11 Oct 24 14:09 PDT |                     |
	|           | --kill=true                                                                                                         |                   |         |         |                     |                     |
	| start     | -p functional-044000                                                                                                | functional-044000 | jenkins | v1.34.0 | 11 Oct 24 14:09 PDT |                     |
	|           | --dry-run --memory                                                                                                  |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                             |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| start     | -p functional-044000 --dry-run                                                                                      | functional-044000 | jenkins | v1.34.0 | 11 Oct 24 14:09 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| start     | -p functional-044000                                                                                                | functional-044000 | jenkins | v1.34.0 | 11 Oct 24 14:09 PDT |                     |
	|           | --dry-run --memory                                                                                                  |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                             |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                  | functional-044000 | jenkins | v1.34.0 | 11 Oct 24 14:09 PDT |                     |
	|           | -p functional-044000                                                                                                |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/11 14:09:33
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 14:09:33.817638    2963 out.go:345] Setting OutFile to fd 1 ...
	I1011 14:09:33.817782    2963 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:09:33.817786    2963 out.go:358] Setting ErrFile to fd 2...
	I1011 14:09:33.817795    2963 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:09:33.817937    2963 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 14:09:33.819424    2963 out.go:352] Setting JSON to false
	I1011 14:09:33.838096    2963 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2343,"bootTime":1728678630,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 14:09:33.838175    2963 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 14:09:33.841693    2963 out.go:177] * [functional-044000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 14:09:33.848732    2963 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 14:09:33.848784    2963 notify.go:220] Checking for updates...
	I1011 14:09:33.854681    2963 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 14:09:33.857719    2963 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 14:09:33.859055    2963 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 14:09:33.861681    2963 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 14:09:33.864671    2963 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 14:09:33.868015    2963 config.go:182] Loaded profile config "functional-044000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 14:09:33.868257    2963 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 14:09:33.872619    2963 out.go:177] * Using the qemu2 driver based on existing profile
	I1011 14:09:33.879692    2963 start.go:297] selected driver: qemu2
	I1011 14:09:33.879698    2963 start.go:901] validating driver "qemu2" against &{Name:functional-044000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-044000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 14:09:33.879763    2963 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 14:09:33.886680    2963 out.go:201] 
	W1011 14:09:33.890721    2963 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1011 14:09:33.894699    2963 out.go:201] 
	
	
	==> Docker <==
	Oct 11 21:09:27 functional-044000 dockerd[5774]: time="2024-10-11T21:09:27.970795202Z" level=warning msg="cleaning up after shim disconnected" id=63fb533023f7b06dfa56d8f25a507cf39a8596f6b2f5e94a692a8219167b2aac namespace=moby
	Oct 11 21:09:27 functional-044000 dockerd[5774]: time="2024-10-11T21:09:27.970799827Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 11 21:09:29 functional-044000 dockerd[5768]: time="2024-10-11T21:09:29.342635799Z" level=info msg="ignoring event" container=f7fe6f9fe81b9a9b31898c285db8da2e793bf5d8b8d4a7647a970880be2dc554 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 11 21:09:29 functional-044000 dockerd[5774]: time="2024-10-11T21:09:29.342738345Z" level=info msg="shim disconnected" id=f7fe6f9fe81b9a9b31898c285db8da2e793bf5d8b8d4a7647a970880be2dc554 namespace=moby
	Oct 11 21:09:29 functional-044000 dockerd[5774]: time="2024-10-11T21:09:29.342763888Z" level=warning msg="cleaning up after shim disconnected" id=f7fe6f9fe81b9a9b31898c285db8da2e793bf5d8b8d4a7647a970880be2dc554 namespace=moby
	Oct 11 21:09:29 functional-044000 dockerd[5774]: time="2024-10-11T21:09:29.342768138Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 11 21:09:32 functional-044000 dockerd[5774]: time="2024-10-11T21:09:32.226600009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 11 21:09:32 functional-044000 dockerd[5774]: time="2024-10-11T21:09:32.226652928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 11 21:09:32 functional-044000 dockerd[5774]: time="2024-10-11T21:09:32.226911564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 11 21:09:32 functional-044000 dockerd[5774]: time="2024-10-11T21:09:32.227048820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 11 21:09:32 functional-044000 dockerd[5768]: time="2024-10-11T21:09:32.260922266Z" level=info msg="ignoring event" container=4a43eaa8c792d9c80c3e929de703397fa7ae9e237b0185f6ab8429106b49fb5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 11 21:09:32 functional-044000 dockerd[5774]: time="2024-10-11T21:09:32.261315491Z" level=info msg="shim disconnected" id=4a43eaa8c792d9c80c3e929de703397fa7ae9e237b0185f6ab8429106b49fb5b namespace=moby
	Oct 11 21:09:32 functional-044000 dockerd[5774]: time="2024-10-11T21:09:32.261349701Z" level=warning msg="cleaning up after shim disconnected" id=4a43eaa8c792d9c80c3e929de703397fa7ae9e237b0185f6ab8429106b49fb5b namespace=moby
	Oct 11 21:09:32 functional-044000 dockerd[5774]: time="2024-10-11T21:09:32.261354243Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 11 21:09:34 functional-044000 dockerd[5774]: time="2024-10-11T21:09:34.781532511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 11 21:09:34 functional-044000 dockerd[5774]: time="2024-10-11T21:09:34.781564970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 11 21:09:34 functional-044000 dockerd[5774]: time="2024-10-11T21:09:34.781573054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 11 21:09:34 functional-044000 dockerd[5774]: time="2024-10-11T21:09:34.781601472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 11 21:09:34 functional-044000 dockerd[5774]: time="2024-10-11T21:09:34.801946616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 11 21:09:34 functional-044000 dockerd[5774]: time="2024-10-11T21:09:34.802042662Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 11 21:09:34 functional-044000 dockerd[5774]: time="2024-10-11T21:09:34.802068247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 11 21:09:34 functional-044000 dockerd[5774]: time="2024-10-11T21:09:34.802128749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 11 21:09:34 functional-044000 cri-dockerd[6041]: time="2024-10-11T21:09:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e805ed75489852c9755fb04394d990245ae7c5ca8191338edafc47485bc6c627/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 11 21:09:34 functional-044000 cri-dockerd[6041]: time="2024-10-11T21:09:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/071d9ba35b4fe4540646cbd279682cf6c3f75be0da9f811bcaa7a5dc774a8e0f/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 11 21:09:35 functional-044000 dockerd[5768]: time="2024-10-11T21:09:35.085152525Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" spanID=4f639460a5f18eaa traceID=18a24d874edc442385eb8b2ed5390dd9
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	4a43eaa8c792d       72565bf5bbedf                                                                                         4 seconds ago        Exited              echoserver-arm            3                   f58b61e8ff053       hello-node-64b4f8f9ff-9st5k
	63fb533023f7b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   9 seconds ago        Exited              mount-munger              0                   f7fe6f9fe81b9       busybox-mount
	73d182fc9a780       nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0                         18 seconds ago       Running             myfrontend                0                   adfab10108443       sp-pod
	a5b0cc56cb326       72565bf5bbedf                                                                                         20 seconds ago       Exited              echoserver-arm            2                   e3d94a545ba83       hello-node-connect-65d86f57f4-52lcm
	faeb847b43931       nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                         42 seconds ago       Running             nginx                     0                   4f85201e71790       nginx-svc
	32d6e25f32c7b       2f6c962e7b831                                                                                         About a minute ago   Running             coredns                   2                   73bf048d16733       coredns-7c65d6cfc9-xhs2q
	e5a447ac18562       24a140c548c07                                                                                         About a minute ago   Running             kube-proxy                2                   7df4eb8e12ae2       kube-proxy-zflrn
	dec79d6b8f10a       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       2                   423612df75dcc       storage-provisioner
	3ed7867d4c685       279f381cb3736                                                                                         About a minute ago   Running             kube-controller-manager   2                   6d7bf5268b535       kube-controller-manager-functional-044000
	a002d1c0cc6d9       27e3830e14027                                                                                         About a minute ago   Running             etcd                      2                   f65656ff0c2cb       etcd-functional-044000
	ecd92488ba852       7f8aa378bb47d                                                                                         About a minute ago   Running             kube-scheduler            2                   58a3f9442f163       kube-scheduler-functional-044000
	4be98ed14a91b       d3f53a98c0a9d                                                                                         About a minute ago   Running             kube-apiserver            0                   1831c8e2c4450       kube-apiserver-functional-044000
	bfbe7a1437533       2f6c962e7b831                                                                                         2 minutes ago        Exited              coredns                   1                   3d930605de506       coredns-7c65d6cfc9-xhs2q
	1885d0b7f35cd       24a140c548c07                                                                                         2 minutes ago        Exited              kube-proxy                1                   78989cc5f954a       kube-proxy-zflrn
	48f05d9201642       ba04bb24b9575                                                                                         2 minutes ago        Exited              storage-provisioner       1                   7d96e1ec1bc22       storage-provisioner
	876aa004ef5ea       7f8aa378bb47d                                                                                         2 minutes ago        Exited              kube-scheduler            1                   37bdfcb9f260a       kube-scheduler-functional-044000
	10acd2b063b4d       279f381cb3736                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   609a407535ba0       kube-controller-manager-functional-044000
	ede21c86d9cab       27e3830e14027                                                                                         2 minutes ago        Exited              etcd                      1                   7848a14f58091       etcd-functional-044000
	
	
	==> coredns [32d6e25f32c7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:33703 - 52227 "HINFO IN 4194756391183585823.4446248547915850112. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.193927997s
	[INFO] 10.244.0.1:56613 - 30664 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000104546s
	[INFO] 10.244.0.1:43035 - 17533 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000114964s
	[INFO] 10.244.0.1:8998 - 50 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000034169s
	[INFO] 10.244.0.1:14979 - 63583 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001311479s
	[INFO] 10.244.0.1:19076 - 33211 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000067128s
	[INFO] 10.244.0.1:60657 - 1219 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000098588s
	
	
	==> coredns [bfbe7a143753] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:35982 - 49932 "HINFO IN 3289280033751029841.6514557111437591883. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.049711187s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-044000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-044000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd
	                    minikube.k8s.io/name=functional-044000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_11T14_06_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Oct 2024 21:06:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-044000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Oct 2024 21:09:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 11 Oct 2024 21:09:18 +0000   Fri, 11 Oct 2024 21:06:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 11 Oct 2024 21:09:18 +0000   Fri, 11 Oct 2024 21:06:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 11 Oct 2024 21:09:18 +0000   Fri, 11 Oct 2024 21:06:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 11 Oct 2024 21:09:18 +0000   Fri, 11 Oct 2024 21:06:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-044000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	System Info:
	  Machine ID:                 1c7c3abb087b4e6fa84a0804877070b8
	  System UUID:                1c7c3abb087b4e6fa84a0804877070b8
	  Boot ID:                    26a9ac22-f02b-4394-acfa-40471d1442fb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-9st5k                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  default                     hello-node-connect-65d86f57f4-52lcm          0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	  kube-system                 coredns-7c65d6cfc9-xhs2q                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m6s
	  kube-system                 etcd-functional-044000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m12s
	  kube-system                 kube-apiserver-functional-044000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-controller-manager-functional-044000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m12s
	  kube-system                 kube-proxy-zflrn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  kube-system                 kube-scheduler-functional-044000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m12s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m5s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-qm4st    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-fhptm        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m5s                   kube-proxy       
	  Normal  Starting                 78s                    kube-proxy       
	  Normal  Starting                 2m4s                   kube-proxy       
	  Normal  NodeHasNoDiskPressure    3m12s (x2 over 3m12s)  kubelet          Node functional-044000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  3m12s (x2 over 3m12s)  kubelet          Node functional-044000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     3m12s (x2 over 3m12s)  kubelet          Node functional-044000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 3m12s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           3m8s                   node-controller  Node functional-044000 event: Registered Node functional-044000 in Controller
	  Normal  NodeReady                3m8s                   kubelet          Node functional-044000 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    2m8s (x8 over 2m8s)    kubelet          Node functional-044000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m8s (x8 over 2m8s)    kubelet          Node functional-044000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m8s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m8s (x7 over 2m8s)    kubelet          Node functional-044000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m8s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m2s                   node-controller  Node functional-044000 event: Registered Node functional-044000 in Controller
	  Normal  Starting                 82s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  82s (x8 over 82s)      kubelet          Node functional-044000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    82s (x8 over 82s)      kubelet          Node functional-044000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     82s (x7 over 82s)      kubelet          Node functional-044000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  82s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           76s                    node-controller  Node functional-044000 event: Registered Node functional-044000 in Controller
	
	
	==> dmesg <==
	[  +3.403739] kauditd_printk_skb: 199 callbacks suppressed
	[ +14.634718] systemd-fstab-generator[4853]: Ignoring "noauto" option for root device
	[  +0.055688] kauditd_printk_skb: 33 callbacks suppressed
	[ +12.870254] systemd-fstab-generator[5298]: Ignoring "noauto" option for root device
	[  +0.055598] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.111151] systemd-fstab-generator[5334]: Ignoring "noauto" option for root device
	[  +0.116119] systemd-fstab-generator[5346]: Ignoring "noauto" option for root device
	[  +0.098201] systemd-fstab-generator[5360]: Ignoring "noauto" option for root device
	[Oct11 21:08] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.417925] systemd-fstab-generator[5990]: Ignoring "noauto" option for root device
	[  +0.095806] systemd-fstab-generator[6002]: Ignoring "noauto" option for root device
	[  +0.096860] systemd-fstab-generator[6014]: Ignoring "noauto" option for root device
	[  +0.086191] systemd-fstab-generator[6029]: Ignoring "noauto" option for root device
	[  +0.210801] systemd-fstab-generator[6198]: Ignoring "noauto" option for root device
	[  +1.107492] systemd-fstab-generator[6319]: Ignoring "noauto" option for root device
	[  +1.221331] kauditd_printk_skb: 189 callbacks suppressed
	[ +19.827157] systemd-fstab-generator[7323]: Ignoring "noauto" option for root device
	[  +0.055105] kauditd_printk_skb: 43 callbacks suppressed
	[  +6.444045] kauditd_printk_skb: 28 callbacks suppressed
	[  +8.837385] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.069024] kauditd_printk_skb: 25 callbacks suppressed
	[Oct11 21:09] kauditd_printk_skb: 32 callbacks suppressed
	[  +7.918218] kauditd_printk_skb: 1 callbacks suppressed
	[ +10.221484] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.803597] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [a002d1c0cc6d] <==
	{"level":"info","ts":"2024-10-11T21:08:15.187484Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-10-11T21:08:15.187552Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T21:08:15.187581Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T21:08:15.188800Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-11T21:08:15.189580Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-11T21:08:15.191149Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-10-11T21:08:15.191212Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-10-11T21:08:15.191759Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-11T21:08:15.195124Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-11T21:08:16.233783Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-10-11T21:08:16.233948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-10-11T21:08:16.234018Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-10-11T21:08:16.234051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-10-11T21:08:16.234112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-10-11T21:08:16.234166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-10-11T21:08:16.234212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-10-11T21:08:16.236414Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-044000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-11T21:08:16.236544Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-11T21:08:16.237114Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-11T21:08:16.237173Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-11T21:08:16.237215Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-11T21:08:16.238871Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-11T21:08:16.241329Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-10-11T21:08:16.238871Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-11T21:08:16.244794Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [ede21c86d9ca] <==
	{"level":"info","ts":"2024-10-11T21:07:30.600114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-11T21:07:30.600144Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-10-11T21:07:30.600162Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-10-11T21:07:30.600170Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-10-11T21:07:30.600183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-10-11T21:07:30.600197Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-10-11T21:07:30.601970Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-11T21:07:30.602187Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-11T21:07:30.601972Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-044000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-11T21:07:30.602467Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-11T21:07:30.602572Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-11T21:07:30.603446Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-11T21:07:30.603467Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-11T21:07:30.604603Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-11T21:07:30.604758Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-10-11T21:08:00.001570Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-10-11T21:08:00.001595Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-044000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-10-11T21:08:00.001647Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-11T21:08:00.001684Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-11T21:08:00.013674Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-11T21:08:00.013700Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-10-11T21:08:00.013720Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-10-11T21:08:00.016268Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-10-11T21:08:00.016312Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-10-11T21:08:00.016316Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-044000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> kernel <==
	 21:09:36 up 3 min,  0 users,  load average: 1.22, 0.54, 0.21
	Linux functional-044000 5.10.207 #1 SMP PREEMPT Tue Oct 8 12:02:09 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4be98ed14a91] <==
	I1011 21:08:16.845200       1 shared_informer.go:320] Caches are synced for configmaps
	I1011 21:08:16.845570       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1011 21:08:16.848374       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1011 21:08:16.848380       1 policy_source.go:224] refreshing policies
	I1011 21:08:16.848395       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1011 21:08:16.878357       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1011 21:08:17.744873       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1011 21:08:17.847033       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I1011 21:08:17.847607       1 controller.go:615] quota admission added evaluator for: endpoints
	I1011 21:08:18.258518       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1011 21:08:18.262404       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1011 21:08:18.273310       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1011 21:08:18.280801       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1011 21:08:18.284748       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1011 21:08:20.240823       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1011 21:08:39.653291       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.110.126.225"}
	I1011 21:08:45.242160       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1011 21:08:45.292585       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.108.248.153"}
	I1011 21:08:50.432426       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.108.22.237"}
	I1011 21:09:02.902316       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.101.126.192"}
	E1011 21:09:16.145493       1 conn.go:339] Error on socket receive: read tcp 192.168.105.4:8441->192.168.105.1:49745: use of closed network connection
	E1011 21:09:24.245369       1 conn.go:339] Error on socket receive: read tcp 192.168.105.4:8441->192.168.105.1:49750: use of closed network connection
	I1011 21:09:34.378384       1 controller.go:615] quota admission added evaluator for: namespaces
	I1011 21:09:34.453149       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.204.20"}
	I1011 21:09:34.483283       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.99.180"}
	
	
	==> kube-controller-manager [10acd2b063b4] <==
	I1011 21:07:34.441779       1 shared_informer.go:320] Caches are synced for TTL
	I1011 21:07:34.442745       1 shared_informer.go:320] Caches are synced for TTL after finished
	I1011 21:07:34.445640       1 shared_informer.go:320] Caches are synced for PV protection
	I1011 21:07:34.452378       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1011 21:07:34.469490       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1011 21:07:34.469537       1 shared_informer.go:320] Caches are synced for persistent volume
	I1011 21:07:34.469565       1 shared_informer.go:320] Caches are synced for ephemeral
	I1011 21:07:34.469626       1 shared_informer.go:320] Caches are synced for HPA
	I1011 21:07:34.469686       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1011 21:07:34.469821       1 shared_informer.go:320] Caches are synced for deployment
	I1011 21:07:34.471699       1 shared_informer.go:320] Caches are synced for expand
	I1011 21:07:34.472814       1 shared_informer.go:320] Caches are synced for node
	I1011 21:07:34.472878       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I1011 21:07:34.472893       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1011 21:07:34.472944       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1011 21:07:34.472956       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1011 21:07:34.473001       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-044000"
	I1011 21:07:34.474903       1 shared_informer.go:320] Caches are synced for resource quota
	I1011 21:07:34.529329       1 shared_informer.go:320] Caches are synced for resource quota
	I1011 21:07:34.622680       1 shared_informer.go:320] Caches are synced for attach detach
	I1011 21:07:34.622972       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1011 21:07:34.652028       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1011 21:07:35.087641       1 shared_informer.go:320] Caches are synced for garbage collector
	I1011 21:07:35.126355       1 shared_informer.go:320] Caches are synced for garbage collector
	I1011 21:07:35.126379       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [3ed7867d4c68] <==
	I1011 21:09:18.405192       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-044000"
	I1011 21:09:19.168532       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="103.88µs"
	I1011 21:09:32.201227       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="947.792µs"
	I1011 21:09:33.337374       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="34.751µs"
	I1011 21:09:34.334948       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="25.501µs"
	I1011 21:09:34.415807       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="7.009017ms"
	E1011 21:09:34.415828       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1011 21:09:34.416797       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="13.753188ms"
	E1011 21:09:34.416811       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1011 21:09:34.419569       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="2.710828ms"
	E1011 21:09:34.419654       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1011 21:09:34.422215       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="3.747623ms"
	E1011 21:09:34.422286       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1011 21:09:34.425350       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="3.413567ms"
	E1011 21:09:34.425376       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1011 21:09:34.426096       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="2.61474ms"
	E1011 21:09:34.426109       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1011 21:09:34.441574       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="6.849634ms"
	I1011 21:09:34.460102       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="18.499146ms"
	I1011 21:09:34.462466       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="15.669273ms"
	I1011 21:09:34.472518       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="12.353084ms"
	I1011 21:09:34.473595       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="1.054172ms"
	I1011 21:09:34.478458       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="15.965994ms"
	I1011 21:09:34.505287       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="26.803595ms"
	I1011 21:09:34.505428       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="112.338µs"
	
	
	==> kube-proxy [1885d0b7f35c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1011 21:07:32.214119       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1011 21:07:32.275000       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E1011 21:07:32.275156       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1011 21:07:32.293030       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1011 21:07:32.293049       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1011 21:07:32.293062       1 server_linux.go:169] "Using iptables Proxier"
	I1011 21:07:32.294923       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1011 21:07:32.295021       1 server.go:483] "Version info" version="v1.31.1"
	I1011 21:07:32.295027       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1011 21:07:32.295762       1 config.go:199] "Starting service config controller"
	I1011 21:07:32.295802       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1011 21:07:32.295816       1 config.go:105] "Starting endpoint slice config controller"
	I1011 21:07:32.295818       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1011 21:07:32.296343       1 config.go:328] "Starting node config controller"
	I1011 21:07:32.297551       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1011 21:07:32.397150       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1011 21:07:32.397150       1 shared_informer.go:320] Caches are synced for service config
	I1011 21:07:32.398212       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e5a447ac1856] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1011 21:08:17.702421       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1011 21:08:17.706682       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E1011 21:08:17.706707       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1011 21:08:17.714357       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1011 21:08:17.714372       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1011 21:08:17.714382       1 server_linux.go:169] "Using iptables Proxier"
	I1011 21:08:17.714949       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1011 21:08:17.715059       1 server.go:483] "Version info" version="v1.31.1"
	I1011 21:08:17.715065       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1011 21:08:17.715504       1 config.go:199] "Starting service config controller"
	I1011 21:08:17.715519       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1011 21:08:17.715531       1 config.go:105] "Starting endpoint slice config controller"
	I1011 21:08:17.715534       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1011 21:08:17.715734       1 config.go:328] "Starting node config controller"
	I1011 21:08:17.715742       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1011 21:08:17.817565       1 shared_informer.go:320] Caches are synced for service config
	I1011 21:08:17.817677       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1011 21:08:17.817542       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [876aa004ef5e] <==
	I1011 21:07:30.297521       1 serving.go:386] Generated self-signed cert in-memory
	W1011 21:07:31.125902       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1011 21:07:31.126022       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1011 21:07:31.126044       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1011 21:07:31.126065       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1011 21:07:31.137835       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1011 21:07:31.137848       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1011 21:07:31.138723       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1011 21:07:31.138761       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1011 21:07:31.138792       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1011 21:07:31.138762       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1011 21:07:31.239867       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1011 21:07:59.987499       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1011 21:07:59.987528       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E1011 21:07:59.987579       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ecd92488ba85] <==
	I1011 21:08:15.482442       1 serving.go:386] Generated self-signed cert in-memory
	W1011 21:08:16.774772       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1011 21:08:16.774806       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1011 21:08:16.774811       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1011 21:08:16.774813       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1011 21:08:16.797832       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1011 21:08:16.798612       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1011 21:08:16.799740       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1011 21:08:16.799817       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1011 21:08:16.799827       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1011 21:08:16.799834       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1011 21:08:16.901707       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 11 21:09:19 functional-044000 kubelet[6326]: E1011 21:09:19.161737    6326 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-9st5k_default(dbd99461-c7a2-4e17-91ed-f289c0dc3ce3)\"" pod="default/hello-node-64b4f8f9ff-9st5k" podUID="dbd99461-c7a2-4e17-91ed-f289c0dc3ce3"
	Oct 11 21:09:19 functional-044000 kubelet[6326]: I1011 21:09:19.167445    6326 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=1.460446373 podStartE2EDuration="2.16742882s" podCreationTimestamp="2024-10-11 21:09:17 +0000 UTC" firstStartedPulling="2024-10-11 21:09:17.545715567 +0000 UTC m=+63.448923271" lastFinishedPulling="2024-10-11 21:09:18.252698015 +0000 UTC m=+64.155905718" observedRunningTime="2024-10-11 21:09:19.12024502 +0000 UTC m=+65.023452723" watchObservedRunningTime="2024-10-11 21:09:19.16742882 +0000 UTC m=+65.070636566"
	Oct 11 21:09:26 functional-044000 kubelet[6326]: I1011 21:09:26.254962    6326 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/54c85afc-2112-4356-b6ce-d6174164fb78-test-volume\") pod \"busybox-mount\" (UID: \"54c85afc-2112-4356-b6ce-d6174164fb78\") " pod="default/busybox-mount"
	Oct 11 21:09:26 functional-044000 kubelet[6326]: I1011 21:09:26.255006    6326 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdt26\" (UniqueName: \"kubernetes.io/projected/54c85afc-2112-4356-b6ce-d6174164fb78-kube-api-access-wdt26\") pod \"busybox-mount\" (UID: \"54c85afc-2112-4356-b6ce-d6174164fb78\") " pod="default/busybox-mount"
	Oct 11 21:09:29 functional-044000 kubelet[6326]: I1011 21:09:29.494735    6326 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdt26\" (UniqueName: \"kubernetes.io/projected/54c85afc-2112-4356-b6ce-d6174164fb78-kube-api-access-wdt26\") pod \"54c85afc-2112-4356-b6ce-d6174164fb78\" (UID: \"54c85afc-2112-4356-b6ce-d6174164fb78\") "
	Oct 11 21:09:29 functional-044000 kubelet[6326]: I1011 21:09:29.494756    6326 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/54c85afc-2112-4356-b6ce-d6174164fb78-test-volume\") pod \"54c85afc-2112-4356-b6ce-d6174164fb78\" (UID: \"54c85afc-2112-4356-b6ce-d6174164fb78\") "
	Oct 11 21:09:29 functional-044000 kubelet[6326]: I1011 21:09:29.494799    6326 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54c85afc-2112-4356-b6ce-d6174164fb78-test-volume" (OuterVolumeSpecName: "test-volume") pod "54c85afc-2112-4356-b6ce-d6174164fb78" (UID: "54c85afc-2112-4356-b6ce-d6174164fb78"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Oct 11 21:09:29 functional-044000 kubelet[6326]: I1011 21:09:29.498389    6326 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54c85afc-2112-4356-b6ce-d6174164fb78-kube-api-access-wdt26" (OuterVolumeSpecName: "kube-api-access-wdt26") pod "54c85afc-2112-4356-b6ce-d6174164fb78" (UID: "54c85afc-2112-4356-b6ce-d6174164fb78"). InnerVolumeSpecName "kube-api-access-wdt26". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 11 21:09:29 functional-044000 kubelet[6326]: I1011 21:09:29.595143    6326 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wdt26\" (UniqueName: \"kubernetes.io/projected/54c85afc-2112-4356-b6ce-d6174164fb78-kube-api-access-wdt26\") on node \"functional-044000\" DevicePath \"\""
	Oct 11 21:09:29 functional-044000 kubelet[6326]: I1011 21:09:29.595167    6326 reconciler_common.go:288] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/54c85afc-2112-4356-b6ce-d6174164fb78-test-volume\") on node \"functional-044000\" DevicePath \"\""
	Oct 11 21:09:30 functional-044000 kubelet[6326]: I1011 21:09:30.267925    6326 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7fe6f9fe81b9a9b31898c285db8da2e793bf5d8b8d4a7647a970880be2dc554"
	Oct 11 21:09:32 functional-044000 kubelet[6326]: I1011 21:09:32.162235    6326 scope.go:117] "RemoveContainer" containerID="a5b0cc56cb326041c36b79d90bf544d445e6d968b56db3a102c65b8415f088c9"
	Oct 11 21:09:32 functional-044000 kubelet[6326]: E1011 21:09:32.162652    6326 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-52lcm_default(2b44d371-9596-4991-8073-7283a69c69bb)\"" pod="default/hello-node-connect-65d86f57f4-52lcm" podUID="2b44d371-9596-4991-8073-7283a69c69bb"
	Oct 11 21:09:32 functional-044000 kubelet[6326]: I1011 21:09:32.164404    6326 scope.go:117] "RemoveContainer" containerID="67f3671ed7a135b6d225e2199e973c6e61086ba2f737de8e72c832aaee38dcf7"
	Oct 11 21:09:33 functional-044000 kubelet[6326]: I1011 21:09:33.324278    6326 scope.go:117] "RemoveContainer" containerID="67f3671ed7a135b6d225e2199e973c6e61086ba2f737de8e72c832aaee38dcf7"
	Oct 11 21:09:33 functional-044000 kubelet[6326]: I1011 21:09:33.324630    6326 scope.go:117] "RemoveContainer" containerID="4a43eaa8c792d9c80c3e929de703397fa7ae9e237b0185f6ab8429106b49fb5b"
	Oct 11 21:09:33 functional-044000 kubelet[6326]: E1011 21:09:33.324783    6326 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 40s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-9st5k_default(dbd99461-c7a2-4e17-91ed-f289c0dc3ce3)\"" pod="default/hello-node-64b4f8f9ff-9st5k" podUID="dbd99461-c7a2-4e17-91ed-f289c0dc3ce3"
	Oct 11 21:09:34 functional-044000 kubelet[6326]: I1011 21:09:34.330640    6326 scope.go:117] "RemoveContainer" containerID="4a43eaa8c792d9c80c3e929de703397fa7ae9e237b0185f6ab8429106b49fb5b"
	Oct 11 21:09:34 functional-044000 kubelet[6326]: E1011 21:09:34.330707    6326 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 40s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-9st5k_default(dbd99461-c7a2-4e17-91ed-f289c0dc3ce3)\"" pod="default/hello-node-64b4f8f9ff-9st5k" podUID="dbd99461-c7a2-4e17-91ed-f289c0dc3ce3"
	Oct 11 21:09:34 functional-044000 kubelet[6326]: E1011 21:09:34.440094    6326 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="54c85afc-2112-4356-b6ce-d6174164fb78" containerName="mount-munger"
	Oct 11 21:09:34 functional-044000 kubelet[6326]: I1011 21:09:34.440133    6326 memory_manager.go:354] "RemoveStaleState removing state" podUID="54c85afc-2112-4356-b6ce-d6174164fb78" containerName="mount-munger"
	Oct 11 21:09:34 functional-044000 kubelet[6326]: I1011 21:09:34.541088    6326 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/cb4292e4-46a1-4fd6-b900-2c62e3039b82-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-fhptm\" (UID: \"cb4292e4-46a1-4fd6-b900-2c62e3039b82\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-fhptm"
	Oct 11 21:09:34 functional-044000 kubelet[6326]: I1011 21:09:34.541196    6326 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5ec475f2-6342-486a-bedd-e03099a4411a-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-qm4st\" (UID: \"5ec475f2-6342-486a-bedd-e03099a4411a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-qm4st"
	Oct 11 21:09:34 functional-044000 kubelet[6326]: I1011 21:09:34.541226    6326 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxvgf\" (UniqueName: \"kubernetes.io/projected/cb4292e4-46a1-4fd6-b900-2c62e3039b82-kube-api-access-nxvgf\") pod \"kubernetes-dashboard-695b96c756-fhptm\" (UID: \"cb4292e4-46a1-4fd6-b900-2c62e3039b82\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-fhptm"
	Oct 11 21:09:34 functional-044000 kubelet[6326]: I1011 21:09:34.541241    6326 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lz9w\" (UniqueName: \"kubernetes.io/projected/5ec475f2-6342-486a-bedd-e03099a4411a-kube-api-access-7lz9w\") pod \"dashboard-metrics-scraper-c5db448b4-qm4st\" (UID: \"5ec475f2-6342-486a-bedd-e03099a4411a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-qm4st"
	
	
	==> storage-provisioner [48f05d920164] <==
	I1011 21:07:32.177344       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1011 21:07:32.193714       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1011 21:07:32.193739       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1011 21:07:49.613939       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1011 21:07:49.614431       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-044000_cf6277d7-5cff-497b-9e67-48223f1c5560!
	I1011 21:07:49.615331       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"427daf5b-6b59-4d14-9ae1-6e6b53a1bfeb", APIVersion:"v1", ResourceVersion:"526", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-044000_cf6277d7-5cff-497b-9e67-48223f1c5560 became leader
	I1011 21:07:49.715720       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-044000_cf6277d7-5cff-497b-9e67-48223f1c5560!
	
	
	==> storage-provisioner [dec79d6b8f10] <==
	I1011 21:08:17.639314       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1011 21:08:17.648616       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1011 21:08:17.648674       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1011 21:08:35.059605       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1011 21:08:35.059696       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-044000_bf37b796-46cb-4c96-8851-aba16e81af52!
	I1011 21:08:35.060124       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"427daf5b-6b59-4d14-9ae1-6e6b53a1bfeb", APIVersion:"v1", ResourceVersion:"629", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-044000_bf37b796-46cb-4c96-8851-aba16e81af52 became leader
	I1011 21:08:35.160578       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-044000_bf37b796-46cb-4c96-8851-aba16e81af52!
	I1011 21:09:04.958442       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I1011 21:09:04.958476       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    4d2fcecb-0076-420c-8452-ee1e379e9bae 350 0 2024-10-11 21:06:30 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-10-11 21:06:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-f994b473-7ec5-40f6-9153-9d32db922569 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  f994b473-7ec5-40f6-9153-9d32db922569 763 0 2024-10-11 21:09:04 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-10-11 21:09:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-10-11 21:09:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I1011 21:09:04.959061       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"f994b473-7ec5-40f6-9153-9d32db922569", APIVersion:"v1", ResourceVersion:"763", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I1011 21:09:04.959195       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-f994b473-7ec5-40f6-9153-9d32db922569" provisioned
	I1011 21:09:04.959206       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I1011 21:09:04.959208       1 volume_store.go:212] Trying to save persistentvolume "pvc-f994b473-7ec5-40f6-9153-9d32db922569"
	I1011 21:09:04.965027       1 volume_store.go:219] persistentvolume "pvc-f994b473-7ec5-40f6-9153-9d32db922569" saved
	I1011 21:09:04.965503       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"f994b473-7ec5-40f6-9153-9d32db922569", APIVersion:"v1", ResourceVersion:"763", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-f994b473-7ec5-40f6-9153-9d32db922569
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-044000 -n functional-044000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-044000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-c5db448b4-qm4st kubernetes-dashboard-695b96c756-fhptm
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-044000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-qm4st kubernetes-dashboard-695b96c756-fhptm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-044000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-qm4st kubernetes-dashboard-695b96c756-fhptm: exit status 1 (40.486166ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-044000/192.168.105.4
	Start Time:       Fri, 11 Oct 2024 14:09:26 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://63fb533023f7b06dfa56d8f25a507cf39a8596f6b2f5e94a692a8219167b2aac
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 11 Oct 2024 14:09:27 -0700
	      Finished:     Fri, 11 Oct 2024 14:09:27 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wdt26 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-wdt26:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  11s   default-scheduler  Successfully assigned default/busybox-mount to functional-044000
	  Normal  Pulling    11s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.426s (1.426s including waiting). Image size: 3547125 bytes.
	  Normal  Created    10s   kubelet            Created container mount-munger
	  Normal  Started    10s   kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-c5db448b4-qm4st" not found
	Error from server (NotFound): pods "kubernetes-dashboard-695b96c756-fhptm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context functional-044000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-qm4st kubernetes-dashboard-695b96c756-fhptm: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (34.31s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (725.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-737000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E1011 14:11:29.355500    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/addons-392000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:11:57.085133    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/addons-392000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:13:45.264597    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/functional-044000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:13:45.272350    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/functional-044000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:13:45.285775    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/functional-044000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:13:45.309181    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/functional-044000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:13:45.352589    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/functional-044000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:13:45.436026    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/functional-044000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:13:45.599492    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/functional-044000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:13:45.923068    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/functional-044000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:13:46.566817    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/functional-044000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:13:47.850513    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/functional-044000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:13:50.414334    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/functional-044000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:13:55.538077    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/functional-044000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:14:05.781792    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/functional-044000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:14:26.265428    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/functional-044000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:15:07.229028    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/functional-044000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:16:29.152445    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/functional-044000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:16:29.353607    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/addons-392000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:18:45.262890    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/functional-044000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:19:12.994497    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/functional-044000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:21:29.352094    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/addons-392000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-737000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 52 (12m5.300575083s)

                                                
                                                
-- stdout --
	* [ha-737000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-737000" primary control-plane node in "ha-737000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Deleting "ha-737000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 14:09:44.455714    3014 out.go:345] Setting OutFile to fd 1 ...
	I1011 14:09:44.455869    3014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:09:44.455871    3014 out.go:358] Setting ErrFile to fd 2...
	I1011 14:09:44.455874    3014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:09:44.455998    3014 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 14:09:44.457171    3014 out.go:352] Setting JSON to false
	I1011 14:09:44.475732    3014 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2354,"bootTime":1728678630,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 14:09:44.475803    3014 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 14:09:44.480223    3014 out.go:177] * [ha-737000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 14:09:44.488343    3014 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 14:09:44.488387    3014 notify.go:220] Checking for updates...
	I1011 14:09:44.495295    3014 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 14:09:44.498256    3014 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 14:09:44.501326    3014 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 14:09:44.504303    3014 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 14:09:44.507318    3014 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 14:09:44.510556    3014 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 14:09:44.514308    3014 out.go:177] * Using the qemu2 driver based on user configuration
	I1011 14:09:44.521183    3014 start.go:297] selected driver: qemu2
	I1011 14:09:44.521189    3014 start.go:901] validating driver "qemu2" against <nil>
	I1011 14:09:44.521196    3014 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 14:09:44.523944    3014 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 14:09:44.528320    3014 out.go:177] * Automatically selected the socket_vmnet network
	I1011 14:09:44.531387    3014 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 14:09:44.531414    3014 cni.go:84] Creating CNI manager for ""
	I1011 14:09:44.531442    3014 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1011 14:09:44.531447    3014 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1011 14:09:44.531474    3014 start.go:340] cluster config:
	{Name:ha-737000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-737000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 14:09:44.536059    3014 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 14:09:44.544279    3014 out.go:177] * Starting "ha-737000" primary control-plane node in "ha-737000" cluster
	I1011 14:09:44.548221    3014 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1011 14:09:44.548243    3014 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1011 14:09:44.548251    3014 cache.go:56] Caching tarball of preloaded images
	I1011 14:09:44.548339    3014 preload.go:172] Found /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1011 14:09:44.548344    3014 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1011 14:09:44.548527    3014 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/ha-737000/config.json ...
	I1011 14:09:44.548546    3014 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/ha-737000/config.json: {Name:mkfc7f1d92de5d3013bdec8cd57c6ca644e2d7a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 14:09:44.548901    3014 start.go:360] acquireMachinesLock for ha-737000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 14:09:44.548959    3014 start.go:364] duration metric: took 52.083µs to acquireMachinesLock for "ha-737000"
	I1011 14:09:44.548978    3014 start.go:93] Provisioning new machine with config: &{Name:ha-737000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-737000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 14:09:44.549016    3014 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 14:09:44.555298    3014 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1011 14:09:44.578979    3014 start.go:159] libmachine.API.Create for "ha-737000" (driver="qemu2")
	I1011 14:09:44.579013    3014 client.go:168] LocalClient.Create starting
	I1011 14:09:44.579086    3014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 14:09:44.579122    3014 main.go:141] libmachine: Decoding PEM data...
	I1011 14:09:44.579135    3014 main.go:141] libmachine: Parsing certificate...
	I1011 14:09:44.579174    3014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 14:09:44.579202    3014 main.go:141] libmachine: Decoding PEM data...
	I1011 14:09:44.579212    3014 main.go:141] libmachine: Parsing certificate...
	I1011 14:09:44.579595    3014 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 14:09:44.797536    3014 main.go:141] libmachine: Creating SSH key...
	I1011 14:09:44.874532    3014 main.go:141] libmachine: Creating Disk image...
	I1011 14:09:44.874538    3014 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 14:09:44.874755    3014 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/ha-737000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/ha-737000/disk.qcow2
	I1011 14:09:44.888851    3014 main.go:141] libmachine: STDOUT: 
	I1011 14:09:44.888873    3014 main.go:141] libmachine: STDERR: 
	I1011 14:09:44.888928    3014 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/ha-737000/disk.qcow2 +20000M
	I1011 14:09:44.897577    3014 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 14:09:44.897593    3014 main.go:141] libmachine: STDERR: 
	I1011 14:09:44.897605    3014 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/ha-737000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/ha-737000/disk.qcow2
	I1011 14:09:44.897610    3014 main.go:141] libmachine: Starting QEMU VM...
	I1011 14:09:44.897620    3014 qemu.go:418] Using hvf for hardware acceleration
	I1011 14:09:44.897654    3014 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/ha-737000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/ha-737000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/ha-737000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:05:70:e4:04:1c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/ha-737000/disk.qcow2
	I1011 14:09:44.944587    3014 main.go:141] libmachine: STDOUT: 
	I1011 14:09:44.944613    3014 main.go:141] libmachine: STDERR: 
	I1011 14:09:44.944617    3014 main.go:141] libmachine: Attempt 0
	I1011 14:09:44.944632    3014 main.go:141] libmachine: Searching for ea:5:70:e4:4:1c in /var/db/dhcpd_leases ...
	I1011 14:09:44.944740    3014 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1011 14:09:44.944760    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f2:54:a8:2:b3:69 ID:1,f2:54:a8:2:b3:69 Lease:0x6709a14c}
	I1011 14:09:44.944777    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:cd:c8:8b:fd:22 ID:1,6a:cd:c8:8b:fd:22 Lease:0x670992fa}
	I1011 14:09:44.944783    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:9a:c2:e0:fb:86:c ID:1,9a:c2:e0:fb:86:c Lease:0x670992c9}
	I1011 14:09:44.944790    3014 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67099906}
	I1011 14:09:46.946945    3014 main.go:141] libmachine: Attempt 1
	I1011 14:09:46.947048    3014 main.go:141] libmachine: Searching for ea:5:70:e4:4:1c in /var/db/dhcpd_leases ...
	I1011 14:09:46.947600    3014 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1011 14:09:46.947652    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f2:54:a8:2:b3:69 ID:1,f2:54:a8:2:b3:69 Lease:0x6709a14c}
	I1011 14:09:46.947696    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:cd:c8:8b:fd:22 ID:1,6a:cd:c8:8b:fd:22 Lease:0x670992fa}
	I1011 14:09:46.947727    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:9a:c2:e0:fb:86:c ID:1,9a:c2:e0:fb:86:c Lease:0x670992c9}
	I1011 14:09:46.947759    3014 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67099906}
	I1011 14:09:48.949976    3014 main.go:141] libmachine: Attempt 2
	I1011 14:09:48.950082    3014 main.go:141] libmachine: Searching for ea:5:70:e4:4:1c in /var/db/dhcpd_leases ...
	I1011 14:09:48.950519    3014 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1011 14:09:48.950574    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f2:54:a8:2:b3:69 ID:1,f2:54:a8:2:b3:69 Lease:0x6709a14c}
	I1011 14:09:48.950652    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:cd:c8:8b:fd:22 ID:1,6a:cd:c8:8b:fd:22 Lease:0x670992fa}
	I1011 14:09:48.950687    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:9a:c2:e0:fb:86:c ID:1,9a:c2:e0:fb:86:c Lease:0x670992c9}
	I1011 14:09:48.950718    3014 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67099906}
	I1011 14:09:50.952931    3014 main.go:141] libmachine: Attempt 3
	I1011 14:09:50.952998    3014 main.go:141] libmachine: Searching for ea:5:70:e4:4:1c in /var/db/dhcpd_leases ...
	I1011 14:09:50.953079    3014 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1011 14:09:50.953094    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f2:54:a8:2:b3:69 ID:1,f2:54:a8:2:b3:69 Lease:0x6709a14c}
	I1011 14:09:50.953101    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:cd:c8:8b:fd:22 ID:1,6a:cd:c8:8b:fd:22 Lease:0x670992fa}
	I1011 14:09:50.953106    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:9a:c2:e0:fb:86:c ID:1,9a:c2:e0:fb:86:c Lease:0x670992c9}
	I1011 14:09:50.953115    3014 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67099906}
	I1011 14:09:52.955173    3014 main.go:141] libmachine: Attempt 4
	I1011 14:09:52.955197    3014 main.go:141] libmachine: Searching for ea:5:70:e4:4:1c in /var/db/dhcpd_leases ...
	I1011 14:09:52.955273    3014 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1011 14:09:52.955282    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f2:54:a8:2:b3:69 ID:1,f2:54:a8:2:b3:69 Lease:0x6709a14c}
	I1011 14:09:52.955288    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:cd:c8:8b:fd:22 ID:1,6a:cd:c8:8b:fd:22 Lease:0x670992fa}
	I1011 14:09:52.955293    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:9a:c2:e0:fb:86:c ID:1,9a:c2:e0:fb:86:c Lease:0x670992c9}
	I1011 14:09:52.955299    3014 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67099906}
	I1011 14:09:54.957348    3014 main.go:141] libmachine: Attempt 5
	I1011 14:09:54.957358    3014 main.go:141] libmachine: Searching for ea:5:70:e4:4:1c in /var/db/dhcpd_leases ...
	I1011 14:09:54.957401    3014 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1011 14:09:54.957417    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f2:54:a8:2:b3:69 ID:1,f2:54:a8:2:b3:69 Lease:0x6709a14c}
	I1011 14:09:54.957422    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:cd:c8:8b:fd:22 ID:1,6a:cd:c8:8b:fd:22 Lease:0x670992fa}
	I1011 14:09:54.957427    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:9a:c2:e0:fb:86:c ID:1,9a:c2:e0:fb:86:c Lease:0x670992c9}
	I1011 14:09:54.957433    3014 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67099906}
	I1011 14:09:56.959473    3014 main.go:141] libmachine: Attempt 6
	I1011 14:09:56.959487    3014 main.go:141] libmachine: Searching for ea:5:70:e4:4:1c in /var/db/dhcpd_leases ...
	I1011 14:09:56.959581    3014 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1011 14:09:56.959591    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f2:54:a8:2:b3:69 ID:1,f2:54:a8:2:b3:69 Lease:0x6709a14c}
	I1011 14:09:56.959600    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:cd:c8:8b:fd:22 ID:1,6a:cd:c8:8b:fd:22 Lease:0x670992fa}
	I1011 14:09:56.959605    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:9a:c2:e0:fb:86:c ID:1,9a:c2:e0:fb:86:c Lease:0x670992c9}
	I1011 14:09:56.959610    3014 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67099906}
	I1011 14:09:58.961688    3014 main.go:141] libmachine: Attempt 7
	I1011 14:09:58.961739    3014 main.go:141] libmachine: Searching for ea:5:70:e4:4:1c in /var/db/dhcpd_leases ...
	I1011 14:09:58.961885    3014 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1011 14:09:58.961895    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:ea:5:70:e4:4:1c ID:1,ea:5:70:e4:4:1c Lease:0x6709a235}
	I1011 14:09:58.961904    3014 main.go:141] libmachine: Found match: ea:5:70:e4:4:1c
	I1011 14:09:58.961915    3014 main.go:141] libmachine: IP: 192.168.105.5
	I1011 14:09:58.961920    3014 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
	I1011 14:15:44.579412    3014 start.go:128] duration metric: took 6m0.03226975s to createHost
	I1011 14:15:44.579495    3014 start.go:83] releasing machines lock for "ha-737000", held for 6m0.032455875s
	W1011 14:15:44.580186    3014 start.go:714] error starting host: creating host: create host timed out in 360.000000 seconds
	I1011 14:15:44.589520    3014 out.go:177] * Deleting "ha-737000" in qemu2 ...
	W1011 14:15:44.622282    3014 out.go:270] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1011 14:15:44.622311    3014 start.go:729] Will try again in 5 seconds ...
	I1011 14:15:49.624525    3014 start.go:360] acquireMachinesLock for ha-737000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 14:15:49.625110    3014 start.go:364] duration metric: took 488.667µs to acquireMachinesLock for "ha-737000"
	I1011 14:15:49.625234    3014 start.go:93] Provisioning new machine with config: &{Name:ha-737000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-737000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 14:15:49.625476    3014 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 14:15:49.633072    3014 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1011 14:15:49.682494    3014 start.go:159] libmachine.API.Create for "ha-737000" (driver="qemu2")
	I1011 14:15:49.682545    3014 client.go:168] LocalClient.Create starting
	I1011 14:15:49.682774    3014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 14:15:49.682854    3014 main.go:141] libmachine: Decoding PEM data...
	I1011 14:15:49.682875    3014 main.go:141] libmachine: Parsing certificate...
	I1011 14:15:49.682955    3014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 14:15:49.683014    3014 main.go:141] libmachine: Decoding PEM data...
	I1011 14:15:49.683029    3014 main.go:141] libmachine: Parsing certificate...
	I1011 14:15:49.683738    3014 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 14:15:49.845890    3014 main.go:141] libmachine: Creating SSH key...
	I1011 14:15:49.945279    3014 main.go:141] libmachine: Creating Disk image...
	I1011 14:15:49.945286    3014 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 14:15:49.945489    3014 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/ha-737000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/ha-737000/disk.qcow2
	I1011 14:15:49.955389    3014 main.go:141] libmachine: STDOUT: 
	I1011 14:15:49.955414    3014 main.go:141] libmachine: STDERR: 
	I1011 14:15:49.955480    3014 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/ha-737000/disk.qcow2 +20000M
	I1011 14:15:49.963899    3014 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 14:15:49.963922    3014 main.go:141] libmachine: STDERR: 
	I1011 14:15:49.963932    3014 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/ha-737000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/ha-737000/disk.qcow2
	I1011 14:15:49.963938    3014 main.go:141] libmachine: Starting QEMU VM...
	I1011 14:15:49.963942    3014 qemu.go:418] Using hvf for hardware acceleration
	I1011 14:15:49.963982    3014 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/ha-737000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/ha-737000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/ha-737000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:b5:9a:a6:02:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/ha-737000/disk.qcow2
	I1011 14:15:50.000586    3014 main.go:141] libmachine: STDOUT: 
	I1011 14:15:50.000616    3014 main.go:141] libmachine: STDERR: 
	I1011 14:15:50.000620    3014 main.go:141] libmachine: Attempt 0
	I1011 14:15:50.000634    3014 main.go:141] libmachine: Searching for ba:b5:9a:a6:2:5a in /var/db/dhcpd_leases ...
	I1011 14:15:50.000762    3014 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1011 14:15:50.000772    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:ea:5:70:e4:4:1c ID:1,ea:5:70:e4:4:1c Lease:0x6709a235}
	I1011 14:15:50.000790    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f2:54:a8:2:b3:69 ID:1,f2:54:a8:2:b3:69 Lease:0x6709a14c}
	I1011 14:15:50.000798    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:cd:c8:8b:fd:22 ID:1,6a:cd:c8:8b:fd:22 Lease:0x670992fa}
	I1011 14:15:50.000803    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:9a:c2:e0:fb:86:c ID:1,9a:c2:e0:fb:86:c Lease:0x670992c9}
	I1011 14:15:50.000813    3014 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67099906}
	I1011 14:15:52.003001    3014 main.go:141] libmachine: Attempt 1
	I1011 14:15:52.003077    3014 main.go:141] libmachine: Searching for ba:b5:9a:a6:2:5a in /var/db/dhcpd_leases ...
	I1011 14:15:52.003464    3014 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1011 14:15:52.003516    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:ea:5:70:e4:4:1c ID:1,ea:5:70:e4:4:1c Lease:0x6709a235}
	I1011 14:15:52.003545    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f2:54:a8:2:b3:69 ID:1,f2:54:a8:2:b3:69 Lease:0x6709a14c}
	I1011 14:15:52.003575    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:cd:c8:8b:fd:22 ID:1,6a:cd:c8:8b:fd:22 Lease:0x670992fa}
	I1011 14:15:52.003603    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:9a:c2:e0:fb:86:c ID:1,9a:c2:e0:fb:86:c Lease:0x670992c9}
	I1011 14:15:52.003631    3014 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67099906}
	I1011 14:15:54.005886    3014 main.go:141] libmachine: Attempt 2
	I1011 14:15:54.005972    3014 main.go:141] libmachine: Searching for ba:b5:9a:a6:2:5a in /var/db/dhcpd_leases ...
	I1011 14:15:54.006429    3014 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1011 14:15:54.006480    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:ea:5:70:e4:4:1c ID:1,ea:5:70:e4:4:1c Lease:0x6709a235}
	I1011 14:15:54.006508    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f2:54:a8:2:b3:69 ID:1,f2:54:a8:2:b3:69 Lease:0x6709a14c}
	I1011 14:15:54.006538    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:cd:c8:8b:fd:22 ID:1,6a:cd:c8:8b:fd:22 Lease:0x670992fa}
	I1011 14:15:54.006565    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:9a:c2:e0:fb:86:c ID:1,9a:c2:e0:fb:86:c Lease:0x670992c9}
	I1011 14:15:54.006596    3014 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67099906}
	I1011 14:15:56.008853    3014 main.go:141] libmachine: Attempt 3
	I1011 14:15:56.008906    3014 main.go:141] libmachine: Searching for ba:b5:9a:a6:2:5a in /var/db/dhcpd_leases ...
	I1011 14:15:56.009044    3014 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1011 14:15:56.009063    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:ea:5:70:e4:4:1c ID:1,ea:5:70:e4:4:1c Lease:0x6709a235}
	I1011 14:15:56.009073    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f2:54:a8:2:b3:69 ID:1,f2:54:a8:2:b3:69 Lease:0x6709a14c}
	I1011 14:15:56.009079    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:cd:c8:8b:fd:22 ID:1,6a:cd:c8:8b:fd:22 Lease:0x670992fa}
	I1011 14:15:56.009084    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:9a:c2:e0:fb:86:c ID:1,9a:c2:e0:fb:86:c Lease:0x670992c9}
	I1011 14:15:56.009095    3014 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67099906}
	I1011 14:15:58.011140    3014 main.go:141] libmachine: Attempt 4
	I1011 14:15:58.011153    3014 main.go:141] libmachine: Searching for ba:b5:9a:a6:2:5a in /var/db/dhcpd_leases ...
	I1011 14:15:58.011214    3014 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1011 14:15:58.011223    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:ea:5:70:e4:4:1c ID:1,ea:5:70:e4:4:1c Lease:0x6709a235}
	I1011 14:15:58.011230    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f2:54:a8:2:b3:69 ID:1,f2:54:a8:2:b3:69 Lease:0x6709a14c}
	I1011 14:15:58.011237    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:cd:c8:8b:fd:22 ID:1,6a:cd:c8:8b:fd:22 Lease:0x670992fa}
	I1011 14:15:58.011242    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:9a:c2:e0:fb:86:c ID:1,9a:c2:e0:fb:86:c Lease:0x670992c9}
	I1011 14:15:58.011247    3014 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67099906}
	I1011 14:16:00.013288    3014 main.go:141] libmachine: Attempt 5
	I1011 14:16:00.013313    3014 main.go:141] libmachine: Searching for ba:b5:9a:a6:2:5a in /var/db/dhcpd_leases ...
	I1011 14:16:00.013350    3014 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1011 14:16:00.013356    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:ea:5:70:e4:4:1c ID:1,ea:5:70:e4:4:1c Lease:0x6709a235}
	I1011 14:16:00.013361    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f2:54:a8:2:b3:69 ID:1,f2:54:a8:2:b3:69 Lease:0x6709a14c}
	I1011 14:16:00.013373    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:cd:c8:8b:fd:22 ID:1,6a:cd:c8:8b:fd:22 Lease:0x670992fa}
	I1011 14:16:00.013379    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:9a:c2:e0:fb:86:c ID:1,9a:c2:e0:fb:86:c Lease:0x670992c9}
	I1011 14:16:00.013386    3014 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67099906}
	I1011 14:16:02.015217    3014 main.go:141] libmachine: Attempt 6
	I1011 14:16:02.015236    3014 main.go:141] libmachine: Searching for ba:b5:9a:a6:2:5a in /var/db/dhcpd_leases ...
	I1011 14:16:02.015326    3014 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1011 14:16:02.015338    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:ea:5:70:e4:4:1c ID:1,ea:5:70:e4:4:1c Lease:0x6709a235}
	I1011 14:16:02.015345    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f2:54:a8:2:b3:69 ID:1,f2:54:a8:2:b3:69 Lease:0x6709a14c}
	I1011 14:16:02.015351    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:cd:c8:8b:fd:22 ID:1,6a:cd:c8:8b:fd:22 Lease:0x670992fa}
	I1011 14:16:02.015360    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:9a:c2:e0:fb:86:c ID:1,9a:c2:e0:fb:86:c Lease:0x670992c9}
	I1011 14:16:02.015365    3014 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67099906}
	I1011 14:16:04.017444    3014 main.go:141] libmachine: Attempt 7
	I1011 14:16:04.017471    3014 main.go:141] libmachine: Searching for ba:b5:9a:a6:2:5a in /var/db/dhcpd_leases ...
	I1011 14:16:04.017605    3014 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I1011 14:16:04.017618    3014 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:ba:b5:9a:a6:2:5a ID:1,ba:b5:9a:a6:2:5a Lease:0x6709a3a2}
	I1011 14:16:04.017621    3014 main.go:141] libmachine: Found match: ba:b5:9a:a6:2:5a
	I1011 14:16:04.017649    3014 main.go:141] libmachine: IP: 192.168.105.6
	I1011 14:16:04.017657    3014 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
	I1011 14:21:49.682922    3014 start.go:128] duration metric: took 6m0.059326958s to createHost
	I1011 14:21:49.682989    3014 start.go:83] releasing machines lock for "ha-737000", held for 6m0.059779667s
	W1011 14:21:49.683237    3014 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-737000" may fix it: creating host: create host timed out in 360.000000 seconds
	* Failed to start qemu2 VM. Running "minikube delete -p ha-737000" may fix it: creating host: create host timed out in 360.000000 seconds
	I1011 14:21:49.691723    3014 out.go:201] 
	W1011 14:21:49.694845    3014 out.go:270] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: creating host: create host timed out in 360.000000 seconds
	W1011 14:21:49.694902    3014 out.go:270] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1011 14:21:49.694948    3014 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1011 14:21:49.701761    3014 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-737000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 52
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-737000 -n ha-737000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-737000 -n ha-737000: exit status 7 (69.92525ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1011 14:21:49.791519    3289 status.go:393] failed to get driver ip: parsing IP: 
	E1011 14:21:49.791526    3289 status.go:119] status error: parsing IP: 

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-737000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StartCluster (725.37s)
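
The StartCluster failure above is a DRV_CREATE_TIMEOUT: the qemu2 driver created the VM, found its DHCP lease (192.168.105.5 on the first attempt, 192.168.105.6 after the retry), but the "Waiting for VM to start (ssh -p 22 docker@...)" step never completed before the 6-minute createHost timeout. A minimal Go sketch of the same check the driver is logging, scanning /var/db/dhcpd_leases for the VM's MAC and probing TCP port 22 on the leased IP, is below; the MAC constant is taken from this run, the lease-file pattern is an assumption about that file's layout, and this is an illustration rather than minikube's actual libmachine code.

	package main

	import (
		"fmt"
		"net"
		"os"
		"regexp"
		"time"
	)

	func main() {
		// MAC taken from the failing run above; substitute your own VM's MAC.
		const mac = "ea:5:70:e4:4:1c"

		// macOS vmnet DHCP leases live here; the exact field layout may vary.
		data, err := os.ReadFile("/var/db/dhcpd_leases")
		if err != nil {
			fmt.Fprintln(os.Stderr, "reading leases:", err)
			os.Exit(1)
		}

		// Assumed lease-entry shape, with lines such as:
		//   ip_address=192.168.105.5
		//   hw_address=1,ea:5:70:e4:4:1c
		re := regexp.MustCompile(`ip_address=(\S+)\s+hw_address=1,` + regexp.QuoteMeta(mac))
		m := re.FindStringSubmatch(string(data))
		if m == nil {
			fmt.Println("no DHCP lease found for", mac)
			return
		}
		ip := m[1]
		fmt.Println("lease found, IP:", ip)

		// Probe SSH, which is what "Waiting for VM to start (ssh ...)" is blocked on.
		conn, err := net.DialTimeout("tcp", net.JoinHostPort(ip, "22"), 5*time.Second)
		if err != nil {
			fmt.Println("port 22 not reachable yet:", err)
			return
		}
		conn.Close()
		fmt.Println("port 22 reachable; the guest booted far enough to accept SSH")
	}

If the lease is present but port 22 never opens, the guest is most likely not booting or its traffic is being dropped, which matches the "disable any conflicting VPN or firewall software" suggestion printed by minikube.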

                                                
                                    
TestMultiControlPlane/serial/DeployApp (107.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-737000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-737000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (63.548625ms)

                                                
                                                
** stderr ** 
	error: cluster "ha-737000" does not exist

                                                
                                                
** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-737000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-737000 -- rollout status deployment/busybox: exit status 1 (62.822209ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-737000"

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-737000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-737000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (62.0325ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-737000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1011 14:21:49.981175    1707 retry.go:31] will retry after 1.040506778s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-737000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-737000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.230334ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-737000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1011 14:21:51.132263    1707 retry.go:31] will retry after 978.019214ms: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-737000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-737000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.299458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-737000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1011 14:21:52.221940    1707 retry.go:31] will retry after 1.399942291s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-737000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-737000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (111.282708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-737000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1011 14:21:53.735508    1707 retry.go:31] will retry after 5.048222076s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-737000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-737000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.993542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-737000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1011 14:21:58.896161    1707 retry.go:31] will retry after 6.663509202s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-737000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-737000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.718208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-737000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1011 14:22:05.671689    1707 retry.go:31] will retry after 8.473485563s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-737000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-737000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.933916ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-737000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1011 14:22:14.255134    1707 retry.go:31] will retry after 16.885936463s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-737000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-737000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.89525ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-737000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1011 14:22:31.252065    1707 retry.go:31] will retry after 9.603813669s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-737000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-737000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.273292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-737000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1011 14:22:40.967452    1707 retry.go:31] will retry after 18.052594724s: failed to retrieve Pod IPs (may be temporary): exit status 1
E1011 14:22:52.443862    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/addons-392000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-737000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-737000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.949209ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-737000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1011 14:22:59.130366    1707 retry.go:31] will retry after 37.405341508s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-737000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-737000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (70.790292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-737000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-737000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-737000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (61.637166ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-737000"

                                                
                                                
** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-737000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-737000 -- exec  -- nslookup kubernetes.io: exit status 1 (61.988583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-737000"

                                                
                                                
** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-737000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-737000 -- exec  -- nslookup kubernetes.default: exit status 1 (61.837042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-737000"

                                                
                                                
** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-737000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-737000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (60.776584ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-737000"

                                                
                                                
** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-737000 -n ha-737000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-737000 -n ha-737000: exit status 7 (35.331959ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1011 14:23:36.889797    3359 status.go:393] failed to get driver ip: parsing IP: 
	E1011 14:23:36.889802    3359 status.go:119] status error: parsing IP: 

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-737000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DeployApp (107.10s)
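
The repeated "will retry after ..." lines above come from the test's retry helper, which re-runs the kubectl query with a growing, randomised delay until an overall deadline expires. A minimal sketch of that pattern follows; the doubling-plus-jitter schedule is an assumption for illustration, not minikube's actual retry.go implementation.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry calls fn until it succeeds or the overall deadline is exceeded,
	// sleeping a growing, jittered interval between attempts.
	func retry(fn func() error, deadline time.Duration) error {
		start := time.Now()
		delay := time.Second
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Since(start) > deadline {
				return fmt.Errorf("gave up after %s: %w", time.Since(start).Round(time.Second), err)
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
			fmt.Printf("will retry after %s: %v\n", sleep, err)
			time.Sleep(sleep)
			delay *= 2 // back off
		}
	}

	func main() {
		attempts := 0
		err := retry(func() error {
			attempts++
			if attempts < 4 {
				return errors.New(`no server found for cluster "ha-737000"`)
			}
			return nil
		}, 2*time.Minute)
		fmt.Println("final result:", err)
	}

Because the cluster never started, every attempt here fails the same way, and the test only gives up once its deadline is reached, which is why DeployApp alone consumes roughly 107 seconds.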

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-737000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-737000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (61.543959ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-737000"

                                                
                                                
** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-737000 -n ha-737000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-737000 -n ha-737000: exit status 7 (34.545916ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1011 14:23:36.986146    3364 status.go:393] failed to get driver ip: parsing IP: 
	E1011 14:23:36.986151    3364 status.go:119] status error: parsing IP: 

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-737000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.10s)
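
Each post-mortem above runs "out/minikube-darwin-arm64 status --format={{.Host}}", which renders the status through a Go text/template so that only the Host field is printed ("Error" in the stdout blocks). A minimal sketch of that mechanism, using a hypothetical Status struct rather than minikube's actual type, looks like:

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a hypothetical stand-in for the struct minikube renders with
	// --format; only the Host field matters for the post-mortem check above.
	type Status struct {
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
	}

	func main() {
		st := Status{Host: "Error", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"}
		// "--format={{.Host}}" selects a single field, just as this template does.
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
	}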

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-737000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-737000 -v=7 --alsologtostderr: exit status 50 (48.977833ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 14:23:37.019441    3366 out.go:345] Setting OutFile to fd 1 ...
	I1011 14:23:37.019723    3366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:23:37.019726    3366 out.go:358] Setting ErrFile to fd 2...
	I1011 14:23:37.019729    3366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:23:37.019860    3366 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 14:23:37.020092    3366 mustload.go:65] Loading cluster: ha-737000
	I1011 14:23:37.020339    3366 config.go:182] Loaded profile config "ha-737000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 14:23:37.021019    3366 host.go:66] Checking if "ha-737000" exists ...
	I1011 14:23:37.025387    3366 out.go:201] 
	W1011 14:23:37.028389    3366 out.go:270] X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node ha-737000 endpoint: failed to lookup ip for ""
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node ha-737000 endpoint: failed to lookup ip for ""
	W1011 14:23:37.028411    3366 out.go:270] * Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>
	I1011 14:23:37.031364    3366 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-737000 -v=7 --alsologtostderr" : exit status 50
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-737000 -n ha-737000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-737000 -n ha-737000: exit status 7 (35.340208ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1011 14:23:37.070770    3368 status.go:393] failed to get driver ip: parsing IP: 
	E1011 14:23:37.070775    3368 status.go:119] status error: parsing IP: 

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-737000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.08s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-737000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-737000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (27.349458ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: ha-737000

                                                
                                                
** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-737000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-737000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-737000 -n ha-737000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-737000 -n ha-737000: exit status 7 (35.5315ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1011 14:23:37.133993    3371 status.go:393] failed to get driver ip: parsing IP: 
	E1011 14:23:37.134002    3371 status.go:119] status error: parsing IP: 

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-737000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)
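The "unexpected end of JSON input" at ha_test.go:264 is the stock encoding/json error for decoding empty input: kubectl exited non-zero because the ha-737000 context does not exist, so the captured label list is empty before it ever reaches the decoder. A small self-contained sketch, assuming only that the harness JSON-decodes the captured output (as the error text indicates):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Decoding an empty byte slice reproduces the error reported above.
        var labels []map[string]string
        err := json.Unmarshal([]byte(""), &labels)
        fmt.Println(err) // unexpected end of JSON input
    }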

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-737000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-737000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-737000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-737000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-737000" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-737000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-737000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-737000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-737000 -n ha-737000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-737000 -n ha-737000: exit status 7 (35.090625ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1011 14:23:37.222336    3376 status.go:393] failed to get driver ip: parsing IP: 
	E1011 14:23:37.222345    3376 status.go:119] status error: parsing IP: 

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-737000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)
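For reference when reading the captured `profile list --output json` payload above, here is a minimal Go sketch of the kind of node-count and status check ha_test.go:305 and :309 perform. The struct names only the keys visible in the payload and is an assumption, not the harness's actual types; the sample JSON is trimmed to the relevant fields:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Assumed minimal shape of the `profile list --output json` payload.
    type profileList struct {
        Valid []struct {
            Name   string
            Status string
            Config struct {
                Nodes []struct {
                    Name         string
                    ControlPlane bool
                    Worker       bool
                }
            }
        }
    }

    func main() {
        payload := `{"invalid":[],"valid":[{"Name":"ha-737000","Status":"Unknown","Config":{"Nodes":[{"Name":"","ControlPlane":true,"Worker":true}]}}]}`

        var pl profileList
        if err := json.Unmarshal([]byte(payload), &pl); err != nil {
            panic(err)
        }
        for _, p := range pl.Valid {
            // The test expects 4 nodes and "HAppy"; this run has 1 node and "Unknown".
            fmt.Printf("%s: status=%s nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
        }
    }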

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (0.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p ha-737000 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-737000 node stop m02 -v=7 --alsologtostderr: exit status 85 (50.739875ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 14:23:37.290901    3380 out.go:345] Setting OutFile to fd 1 ...
	I1011 14:23:37.291187    3380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:23:37.291190    3380 out.go:358] Setting ErrFile to fd 2...
	I1011 14:23:37.291193    3380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:23:37.291313    3380 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 14:23:37.291577    3380 mustload.go:65] Loading cluster: ha-737000
	I1011 14:23:37.291780    3380 config.go:182] Loaded profile config "ha-737000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 14:23:37.296048    3380 out.go:201] 
	W1011 14:23:37.299014    3380 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1011 14:23:37.299019    3380 out.go:270] * 
	* 
	W1011 14:23:37.300467    3380 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 14:23:37.303912    3380 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-737000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:371: (dbg) Run:  out/minikube-darwin-arm64 -p ha-737000 status -v=7 --alsologtostderr
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-737000 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-737000 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-737000 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-737000 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-737000 -n ha-737000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-737000 -n ha-737000: exit status 7 (34.794708ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1011 14:23:37.377880    3384 status.go:393] failed to get driver ip: parsing IP: 
	E1011 14:23:37.377889    3384 status.go:119] status error: parsing IP: 

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-737000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.12s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-737000" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-737000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-737000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-737000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-737000 -n ha-737000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-737000 -n ha-737000: exit status 7 (34.885625ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1011 14:23:37.464928    3389 status.go:393] failed to get driver ip: parsing IP: 
	E1011 14:23:37.464933    3389 status.go:119] status error: parsing IP: 

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-737000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (0.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p ha-737000 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-737000 node start m02 -v=7 --alsologtostderr: exit status 85 (50.615833ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 14:23:37.498921    3391 out.go:345] Setting OutFile to fd 1 ...
	I1011 14:23:37.499209    3391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:23:37.499212    3391 out.go:358] Setting ErrFile to fd 2...
	I1011 14:23:37.499214    3391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:23:37.499361    3391 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 14:23:37.499620    3391 mustload.go:65] Loading cluster: ha-737000
	I1011 14:23:37.499821    3391 config.go:182] Loaded profile config "ha-737000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 14:23:37.503984    3391 out.go:201] 
	W1011 14:23:37.506978    3391 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1011 14:23:37.506984    3391 out.go:270] * 
	* 
	W1011 14:23:37.508410    3391 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 14:23:37.511931    3391 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:424: I1011 14:23:37.498921    3391 out.go:345] Setting OutFile to fd 1 ...
I1011 14:23:37.499209    3391 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 14:23:37.499212    3391 out.go:358] Setting ErrFile to fd 2...
I1011 14:23:37.499214    3391 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 14:23:37.499361    3391 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
I1011 14:23:37.499620    3391 mustload.go:65] Loading cluster: ha-737000
I1011 14:23:37.499821    3391 config.go:182] Loaded profile config "ha-737000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1011 14:23:37.503984    3391 out.go:201] 
W1011 14:23:37.506978    3391 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W1011 14:23:37.506984    3391 out.go:270] * 
* 
W1011 14:23:37.508410    3391 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1011 14:23:37.511931    3391 out.go:201] 

                                                
                                                
ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-737000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-737000 status -v=7 --alsologtostderr
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-737000 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-darwin-arm64 -p ha-737000 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-darwin-arm64 -p ha-737000 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-darwin-arm64 -p ha-737000 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
ha_test.go:450: (dbg) Non-zero exit: kubectl get nodes: exit status 1 (33.898166ms)

                                                
                                                
** stderr ** 
	E1011 14:23:37.581801    3395 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E1011 14:23:37.582436    3395 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E1011 14:23:37.583512    3395 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E1011 14:23:37.583925    3395 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E1011 14:23:37.584981    3395 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	The connection to the server localhost:8080 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
ha_test.go:452: failed to kubectl get nodes. args "kubectl get nodes" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-737000 -n ha-737000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-737000 -n ha-737000: exit status 7 (34.357083ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1011 14:23:37.619324    3396 status.go:393] failed to get driver ip: parsing IP: 
	E1011 14:23:37.619332    3396 status.go:119] status error: parsing IP: 

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-737000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (0.15s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-737000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-737000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-737000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-737000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-737000" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-737000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-737000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-737000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-737000 -n ha-737000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-737000 -n ha-737000: exit status 7 (34.39975ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1011 14:23:37.707590    3401 status.go:393] failed to get driver ip: parsing IP: 
	E1011 14:23:37.707598    3401 status.go:119] status error: parsing IP: 

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-737000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (966.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-737000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-737000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-737000 -v=7 --alsologtostderr: (5.080110834s)
ha_test.go:469: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-737000 --wait=true -v=7 --alsologtostderr
E1011 14:23:45.260792    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/functional-044000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:26:29.350228    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/addons-392000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:28:45.256244    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/functional-044000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:30:08.341755    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/functional-044000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:31:29.335484    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/addons-392000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:33:45.244140    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/functional-044000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:36:29.332716    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/addons-392000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:38:45.241375    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/functional-044000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:39:32.427175    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/addons-392000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-737000 --wait=true -v=7 --alsologtostderr: signal: killed (16m1.597756833s)

                                                
                                                
-- stdout --
	* [ha-737000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-737000" primary control-plane node in "ha-737000" cluster
	* Restarting existing qemu2 VM for "ha-737000" ...

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 14:23:42.889901    3418 out.go:345] Setting OutFile to fd 1 ...
	I1011 14:23:42.890071    3418 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:23:42.890075    3418 out.go:358] Setting ErrFile to fd 2...
	I1011 14:23:42.890078    3418 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:23:42.890248    3418 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 14:23:42.891539    3418 out.go:352] Setting JSON to false
	I1011 14:23:42.911262    3418 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3192,"bootTime":1728678630,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 14:23:42.911350    3418 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 14:23:42.916671    3418 out.go:177] * [ha-737000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 14:23:42.924591    3418 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 14:23:42.924649    3418 notify.go:220] Checking for updates...
	I1011 14:23:42.931549    3418 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 14:23:42.934521    3418 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 14:23:42.937503    3418 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 14:23:42.940527    3418 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 14:23:42.943488    3418 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 14:23:42.946760    3418 config.go:182] Loaded profile config "ha-737000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 14:23:42.946814    3418 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 14:23:42.951459    3418 out.go:177] * Using the qemu2 driver based on existing profile
	I1011 14:23:42.958498    3418 start.go:297] selected driver: qemu2
	I1011 14:23:42.958503    3418 start.go:901] validating driver "qemu2" against &{Name:ha-737000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.1 ClusterName:ha-737000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 14:23:42.958551    3418 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 14:23:42.961106    3418 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 14:23:42.961130    3418 cni.go:84] Creating CNI manager for ""
	I1011 14:23:42.961156    3418 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1011 14:23:42.961208    3418 start.go:340] cluster config:
	{Name:ha-737000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-737000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 14:23:42.965820    3418 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 14:23:42.973383    3418 out.go:177] * Starting "ha-737000" primary control-plane node in "ha-737000" cluster
	I1011 14:23:42.977474    3418 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1011 14:23:42.977491    3418 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1011 14:23:42.977505    3418 cache.go:56] Caching tarball of preloaded images
	I1011 14:23:42.977586    3418 preload.go:172] Found /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1011 14:23:42.977591    3418 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1011 14:23:42.977644    3418 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/ha-737000/config.json ...
	I1011 14:23:42.978050    3418 start.go:360] acquireMachinesLock for ha-737000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 14:23:42.978099    3418 start.go:364] duration metric: took 42.917µs to acquireMachinesLock for "ha-737000"
	I1011 14:23:42.978109    3418 start.go:96] Skipping create...Using existing machine configuration
	I1011 14:23:42.978113    3418 fix.go:54] fixHost starting: 
	I1011 14:23:42.978237    3418 fix.go:112] recreateIfNeeded on ha-737000: state=Stopped err=<nil>
	W1011 14:23:42.978245    3418 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 14:23:42.985464    3418 out.go:177] * Restarting existing qemu2 VM for "ha-737000" ...
	I1011 14:23:42.989547    3418 qemu.go:418] Using hvf for hardware acceleration
	I1011 14:23:42.989592    3418 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/ha-737000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/ha-737000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/ha-737000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:b5:9a:a6:02:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/ha-737000/disk.qcow2
	I1011 14:23:43.030543    3418 main.go:141] libmachine: STDOUT: 
	I1011 14:23:43.030570    3418 main.go:141] libmachine: STDERR: 
	I1011 14:23:43.030574    3418 main.go:141] libmachine: Attempt 0
	I1011 14:23:43.030585    3418 main.go:141] libmachine: Searching for ba:b5:9a:a6:2:5a in /var/db/dhcpd_leases ...
	I1011 14:23:43.030652    3418 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I1011 14:23:43.030673    3418 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:ba:b5:9a:a6:2:5a ID:1,ba:b5:9a:a6:2:5a Lease:0x6709975c}
	I1011 14:23:43.030678    3418 main.go:141] libmachine: Found match: ba:b5:9a:a6:2:5a
	I1011 14:23:43.030690    3418 main.go:141] libmachine: IP: 192.168.105.6
	I1011 14:23:43.030695    3418 main.go:141] libmachine: Waiting for VM to start (ssh -p 0 docker@192.168.105.6)...

                                                
                                                
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-737000 -v=7 --alsologtostderr" : signal: killed
ha_test.go:474: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-737000
ha_test.go:474: (dbg) Non-zero exit: out/minikube-darwin-arm64 node list -p ha-737000: context deadline exceeded (750ns)
ha_test.go:476: failed to run node list. args "out/minikube-darwin-arm64 node list -p ha-737000" : context deadline exceeded
ha_test.go:481: reported node list is not the same after restart. Before restart: ha-737000	

                                                
                                                
After restart: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-737000 -n ha-737000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-737000 -n ha-737000: exit status 7 (34.559916ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1011 14:39:44.434444    3837 status.go:393] failed to get driver ip: parsing IP: 
	E1011 14:39:44.434452    3837 status.go:119] status error: parsing IP: 

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-737000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (966.75s)
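The odd-looking "context deadline exceeded (750ns)" on the follow-up `node list` reflects the serial tests sharing a single deadline: the 16-minute `start` consumed it, so the next command is created with a context that has already expired and fails before anything runs. A standalone sketch of that pattern (the binary path is purely illustrative; with an expired context the error is returned before the command is executed):

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // A deadline this short is effectively already expired by the time the
        // command object exists, mirroring the 750ns left over in the run above.
        ctx, cancel := context.WithTimeout(context.Background(), 750*time.Nanosecond)
        defer cancel()

        // Illustrative path only; Run returns ctx.Err() when the context is
        // already done, so nothing is actually executed here.
        cmd := exec.CommandContext(ctx, "out/minikube-darwin-arm64", "node", "list", "-p", "ha-737000")
        if _, err := cmd.CombinedOutput(); err != nil {
            fmt.Println("node list failed:", err) // context deadline exceeded
        }
    }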

                                                
                                    
TestJSONOutput/start/Command (725.26s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-239000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
E1011 14:41:29.330522    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/addons-392000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:43:45.237174    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/functional-044000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:46:29.310124    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/addons-392000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:46:48.318793    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/functional-044000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:48:45.218202    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/functional-044000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:51:29.305974    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/addons-392000/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-239000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 52 (12m5.263456083s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"afd062b7-355f-4fb5-80d0-d2f0f51a5b7e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-239000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b9721c73-177c-4c45-aaa7-31fc998b9f4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19749"}}
	{"specversion":"1.0","id":"69a8d05e-63bf-496f-9fe8-d62a4e9c690e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig"}}
	{"specversion":"1.0","id":"9205ca07-3c52-415f-a8b0-f727fdc4ee24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"be316b01-87e7-4214-8e77-c9f1f20b3a4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"dc5f8168-8092-48f3-a846-0502f55781d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube"}}
	{"specversion":"1.0","id":"9e384cb5-250b-476f-aeb2-629c7bd0e6b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"497d3ef9-6126-4307-94f5-4469116c447f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1eee89a4-afb5-457f-b506-9753c2561258","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"a3e1e64c-57d4-402d-85ce-9d563445b108","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-239000\" primary control-plane node in \"json-output-239000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e5cf705d-ccbf-48cc-a16f-ba1356057433","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"bf714f09-513e-47aa-95b3-a4406119ab28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-239000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"f571028a-482a-4f3a-926a-0fdc7969632f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds"}}
	{"specversion":"1.0","id":"968e74f8-d0e4-45cc-94db-4adcda8defea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"60cb4719-f9eb-4e8e-b7e2-91c9a4c98bc7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-239000\" may fix it: creating host: create host timed out in 360.000000 seconds"}}
	{"specversion":"1.0","id":"982ec999-2eef-40a4-95f8-0cda0ef26833","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try 'minikube delete', and disable any conflicting VPN or firewall software","exitcode":"52","issues":"https://github.com/kubernetes/minikube/issues/7072","message":"Failed to start host: creating host: create host timed out in 360.000000 seconds","name":"DRV_CREATE_TIMEOUT","url":""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-239000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 52
--- FAIL: TestJSONOutput/start/Command (725.26s)
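Note: with --output=json each stdout line above is a CloudEvents-style envelope (specversion, id, source, type, datacontenttype, data). The sketch below shows how such a line can be decoded; the struct is illustrative rather than minikube's own type, and the embedded id/message are placeholders abbreviated from the output above.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// event mirrors the fields visible in the JSON lines above (illustrative only).
	type event struct {
		SpecVersion     string            `json:"specversion"`
		ID              string            `json:"id"`
		Source          string            `json:"source"`
		Type            string            `json:"type"`
		DataContentType string            `json:"datacontenttype"`
		Data            map[string]string `json:"data"`
	}

	func main() {
		line := `{"specversion":"1.0","id":"00000000-0000-0000-0000-000000000000","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM ...","name":"Creating VM","totalsteps":"19"}}`
		var ev event
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			panic(err)
		}
		fmt.Printf("%s step=%s name=%q\n", ev.Type, ev.Data["currentstep"], ev.Data["name"])
	}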

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
json_output_test.go:114: step 9 has already been assigned to another step:
Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
Cannot use for:
Deleting "json-output-239000" in qemu2 ...
[Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: afd062b7-355f-4fb5-80d0-d2f0f51a5b7e
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-239000] minikube v1.34.0 on Darwin 15.0.1 (arm64)",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: b9721c73-177c-4c45-aaa7-31fc998b9f4d
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=19749"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 69a8d05e-63bf-496f-9fe8-d62a4e9c690e
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 9205ca07-3c52-415f-a8b0-f727fdc4ee24
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-darwin-arm64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: be316b01-87e7-4214-8e77-c9f1f20b3a4a
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: dc5f8168-8092-48f3-a846-0502f55781d5
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 9e384cb5-250b-476f-aeb2-629c7bd0e6b8
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_FORCE_SYSTEMD="
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 497d3ef9-6126-4307-94f5-4469116c447f
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the qemu2 driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 1eee89a4-afb5-457f-b506-9753c2561258
datacontenttype: application/json
Data,
{
"message": "Automatically selected the socket_vmnet network"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: a3e1e64c-57d4-402d-85ce-9d563445b108
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting \"json-output-239000\" primary control-plane node in \"json-output-239000\" cluster",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: e5cf705d-ccbf-48cc-a16f-ba1356057433
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: bf714f09-513e-47aa-95b3-a4406119ab28
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Deleting \"json-output-239000\" in qemu2 ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: f571028a-482a-4f3a-926a-0fdc7969632f
datacontenttype: application/json
Data,
{
"message": "StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 968e74f8-d0e4-45cc-94db-4adcda8defea
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 60cb4719-f9eb-4e8e-b7e2-91c9a4c98bc7
datacontenttype: application/json
Data,
{
"message": "Failed to start qemu2 VM. Running \"minikube delete -p json-output-239000\" may fix it: creating host: create host timed out in 360.000000 seconds"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 982ec999-2eef-40a4-95f8-0cda0ef26833
datacontenttype: application/json
Data,
{
"advice": "Try 'minikube delete', and disable any conflicting VPN or firewall software",
"exitcode": "52",
"issues": "https://github.com/kubernetes/minikube/issues/7072",
"message": "Failed to start host: creating host: create host timed out in 360.000000 seconds",
"name": "DRV_CREATE_TIMEOUT",
"url": ""
}
]
--- FAIL: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
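Note: the assertion here is that each currentstep value is bound to exactly one step message; the stream above fails it because step 9 is emitted both for "Creating qemu2 VM ..." and for "Deleting \"json-output-239000\" in qemu2 ..." during the retry. A minimal sketch of that invariant follows (a sketch of the property only, not the actual json_output_test.go code).

	package main

	import "fmt"

	type step struct{ CurrentStep, Message string }

	// checkDistinctSteps returns an error if one currentstep value is used with
	// two different messages (sketch of the invariant, not the real test code).
	func checkDistinctSteps(steps []step) error {
		seen := map[string]string{}
		for _, s := range steps {
			if prev, ok := seen[s.CurrentStep]; ok && prev != s.Message {
				return fmt.Errorf("step %s already assigned to %q, cannot use for %q", s.CurrentStep, prev, s.Message)
			}
			seen[s.CurrentStep] = s.Message
		}
		return nil
	}

	func main() {
		err := checkDistinctSteps([]step{
			{"9", "Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ..."},
			{"9", "Deleting \"json-output-239000\" in qemu2 ..."},
		})
		fmt.Println(err) // reproduces the reuse seen in this run
	}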

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
json_output_test.go:144: current step is not in increasing order: [Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: afd062b7-355f-4fb5-80d0-d2f0f51a5b7e
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-239000] minikube v1.34.0 on Darwin 15.0.1 (arm64)",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: b9721c73-177c-4c45-aaa7-31fc998b9f4d
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=19749"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 69a8d05e-63bf-496f-9fe8-d62a4e9c690e
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 9205ca07-3c52-415f-a8b0-f727fdc4ee24
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-darwin-arm64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: be316b01-87e7-4214-8e77-c9f1f20b3a4a
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: dc5f8168-8092-48f3-a846-0502f55781d5
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 9e384cb5-250b-476f-aeb2-629c7bd0e6b8
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_FORCE_SYSTEMD="
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 497d3ef9-6126-4307-94f5-4469116c447f
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the qemu2 driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 1eee89a4-afb5-457f-b506-9753c2561258
datacontenttype: application/json
Data,
{
"message": "Automatically selected the socket_vmnet network"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: a3e1e64c-57d4-402d-85ce-9d563445b108
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting \"json-output-239000\" primary control-plane node in \"json-output-239000\" cluster",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: e5cf705d-ccbf-48cc-a16f-ba1356057433
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: bf714f09-513e-47aa-95b3-a4406119ab28
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Deleting \"json-output-239000\" in qemu2 ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: f571028a-482a-4f3a-926a-0fdc7969632f
datacontenttype: application/json
Data,
{
"message": "StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 968e74f8-d0e4-45cc-94db-4adcda8defea
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 60cb4719-f9eb-4e8e-b7e2-91c9a4c98bc7
datacontenttype: application/json
Data,
{
"message": "Failed to start qemu2 VM. Running \"minikube delete -p json-output-239000\" may fix it: creating host: create host timed out in 360.000000 seconds"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 982ec999-2eef-40a4-95f8-0cda0ef26833
datacontenttype: application/json
Data,
{
"advice": "Try 'minikube delete', and disable any conflicting VPN or firewall software",
"exitcode": "52",
"issues": "https://github.com/kubernetes/minikube/issues/7072",
"message": "Failed to start host: creating host: create host timed out in 360.000000 seconds",
"name": "DRV_CREATE_TIMEOUT",
"url": ""
}
]
--- FAIL: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
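Note: this subtest checks that currentstep values only move forward through the 19 steps. Given that the sequence above (0, 1, 3, 9, 9, 9) trips it, a strictly increasing check is one plausible reading; the sketch below shows that property and is not the real test code.

	package main

	import (
		"fmt"
		"strconv"
	)

	// checkIncreasingSteps fails if currentstep values are not strictly increasing.
	// Sketch of the property only; the real test may differ in detail.
	func checkIncreasingSteps(steps []string) error {
		last := -1
		for _, s := range steps {
			n, err := strconv.Atoi(s)
			if err != nil {
				return err
			}
			if n <= last {
				return fmt.Errorf("current step is not in increasing order: %d after %d", n, last)
			}
			last = n
		}
		return nil
	}

	func main() {
		fmt.Println(checkIncreasingSteps([]string{"0", "1", "3", "9", "9", "9"}))
	}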

                                                
                                    
TestJSONOutput/pause/Command (0.09s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-239000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-239000 --output=json --user=testUser: exit status 50 (87.314417ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a5446aaa-692b-4bdf-bee7-ba6703b24dec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Recreate the cluster by running:\n\t\tminikube delete {{.profileArg}}\n\t\tminikube start {{.profileArg}}","exitcode":"50","issues":"","message":"Unable to get control-plane node json-output-239000 endpoint: failed to lookup ip for \"\"","name":"DRV_CP_ENDPOINT","url":""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-239000 --output=json --user=testUser": exit status 50
--- FAIL: TestJSONOutput/pause/Command (0.09s)

                                                
                                    
TestJSONOutput/unpause/Command (0.06s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-239000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-239000 --output=json --user=testUser: exit status 50 (58.816209ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node json-output-239000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-239000 --output=json --user=testUser": exit status 50
--- FAIL: TestJSONOutput/unpause/Command (0.06s)
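Note: the "minikube delete <no value>" lines in the unpause stderr above are what Go's text/template prints when a referenced key ({{.profileArg}} in the advice text) is missing from the template data, while the JSON variant of the same advice keeps the literal {{.profileArg}} placeholder. A minimal sketch of that behavior, using a hypothetical advice template string:

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		// Hypothetical advice template with a .profileArg reference.
		t := template.Must(template.New("advice").Parse(
			"Recreate the cluster by running:\n\tminikube delete {{.profileArg}}\n\tminikube start {{.profileArg}}\n"))
		// With no profileArg in the data, the missing key renders as "<no value>"
		// (text/template's default missingkey behavior), matching the output above.
		if err := t.Execute(os.Stdout, map[string]string{}); err != nil {
			panic(err)
		}
	}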

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.23s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-278000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
E1011 14:53:45.214499    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/functional-044000/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-278000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.158142208s)

                                                
                                                
-- stdout --
	* [mount-start-1-278000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-278000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-278000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-278000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-278000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-278000 -n mount-start-1-278000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-278000 -n mount-start-1-278000: exit status 7 (74.00575ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-278000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.23s)
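Note: both VM creation attempts above die with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. nothing is accepting connections on the unix socket that socket_vmnet_client is pointed at. A quick way to confirm that from Go is to dial the socket directly (path taken from the failing command line; this is a diagnostic sketch, not part of minikube):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path from the failing qemu invocation above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err) // e.g. connect: connection refused
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections on", sock)
	}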

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-508000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-508000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.770985917s)

                                                
                                                
-- stdout --
	* [multinode-508000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-508000" primary control-plane node in "multinode-508000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-508000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 14:53:54.284228    4079 out.go:345] Setting OutFile to fd 1 ...
	I1011 14:53:54.284373    4079 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:53:54.284376    4079 out.go:358] Setting ErrFile to fd 2...
	I1011 14:53:54.284379    4079 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:53:54.284513    4079 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 14:53:54.285631    4079 out.go:352] Setting JSON to false
	I1011 14:53:54.303148    4079 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5004,"bootTime":1728678630,"procs":500,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 14:53:54.303219    4079 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 14:53:54.308742    4079 out.go:177] * [multinode-508000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 14:53:54.316636    4079 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 14:53:54.316674    4079 notify.go:220] Checking for updates...
	I1011 14:53:54.322074    4079 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 14:53:54.325596    4079 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 14:53:54.328633    4079 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 14:53:54.331723    4079 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 14:53:54.334602    4079 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 14:53:54.337846    4079 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 14:53:54.341608    4079 out.go:177] * Using the qemu2 driver based on user configuration
	I1011 14:53:54.348591    4079 start.go:297] selected driver: qemu2
	I1011 14:53:54.348598    4079 start.go:901] validating driver "qemu2" against <nil>
	I1011 14:53:54.348606    4079 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 14:53:54.351144    4079 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 14:53:54.353657    4079 out.go:177] * Automatically selected the socket_vmnet network
	I1011 14:53:54.356602    4079 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 14:53:54.356620    4079 cni.go:84] Creating CNI manager for ""
	I1011 14:53:54.356642    4079 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1011 14:53:54.356646    4079 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1011 14:53:54.356680    4079 start.go:340] cluster config:
	{Name:multinode-508000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-508000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 14:53:54.361377    4079 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 14:53:54.368483    4079 out.go:177] * Starting "multinode-508000" primary control-plane node in "multinode-508000" cluster
	I1011 14:53:54.372604    4079 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1011 14:53:54.372623    4079 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1011 14:53:54.372631    4079 cache.go:56] Caching tarball of preloaded images
	I1011 14:53:54.372701    4079 preload.go:172] Found /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1011 14:53:54.372707    4079 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1011 14:53:54.372902    4079 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/multinode-508000/config.json ...
	I1011 14:53:54.372913    4079 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/multinode-508000/config.json: {Name:mk4029ec6909cedfe326f431e88db06b65cab15f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 14:53:54.373260    4079 start.go:360] acquireMachinesLock for multinode-508000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 14:53:54.373309    4079 start.go:364] duration metric: took 43.708µs to acquireMachinesLock for "multinode-508000"
	I1011 14:53:54.373322    4079 start.go:93] Provisioning new machine with config: &{Name:multinode-508000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:multinode-508000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 14:53:54.373350    4079 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 14:53:54.376596    4079 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1011 14:53:54.393125    4079 start.go:159] libmachine.API.Create for "multinode-508000" (driver="qemu2")
	I1011 14:53:54.393158    4079 client.go:168] LocalClient.Create starting
	I1011 14:53:54.393230    4079 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 14:53:54.393269    4079 main.go:141] libmachine: Decoding PEM data...
	I1011 14:53:54.393277    4079 main.go:141] libmachine: Parsing certificate...
	I1011 14:53:54.393329    4079 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 14:53:54.393360    4079 main.go:141] libmachine: Decoding PEM data...
	I1011 14:53:54.393368    4079 main.go:141] libmachine: Parsing certificate...
	I1011 14:53:54.393831    4079 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 14:53:54.545375    4079 main.go:141] libmachine: Creating SSH key...
	I1011 14:53:54.580310    4079 main.go:141] libmachine: Creating Disk image...
	I1011 14:53:54.580315    4079 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 14:53:54.580540    4079 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/multinode-508000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/multinode-508000/disk.qcow2
	I1011 14:53:54.590356    4079 main.go:141] libmachine: STDOUT: 
	I1011 14:53:54.590384    4079 main.go:141] libmachine: STDERR: 
	I1011 14:53:54.590436    4079 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/multinode-508000/disk.qcow2 +20000M
	I1011 14:53:54.598781    4079 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 14:53:54.598796    4079 main.go:141] libmachine: STDERR: 
	I1011 14:53:54.598815    4079 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/multinode-508000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/multinode-508000/disk.qcow2
	I1011 14:53:54.598822    4079 main.go:141] libmachine: Starting QEMU VM...
	I1011 14:53:54.598834    4079 qemu.go:418] Using hvf for hardware acceleration
	I1011 14:53:54.598866    4079 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/multinode-508000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/multinode-508000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/multinode-508000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:88:6c:09:62:96 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/multinode-508000/disk.qcow2
	I1011 14:53:54.600643    4079 main.go:141] libmachine: STDOUT: 
	I1011 14:53:54.600656    4079 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 14:53:54.600678    4079 client.go:171] duration metric: took 207.516208ms to LocalClient.Create
	I1011 14:53:56.602828    4079 start.go:128] duration metric: took 2.229486333s to createHost
	I1011 14:53:56.602887    4079 start.go:83] releasing machines lock for "multinode-508000", held for 2.229597917s
	W1011 14:53:56.602981    4079 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 14:53:56.612286    4079 out.go:177] * Deleting "multinode-508000" in qemu2 ...
	W1011 14:53:56.641650    4079 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 14:53:56.641685    4079 start.go:729] Will try again in 5 seconds ...
	I1011 14:54:01.643866    4079 start.go:360] acquireMachinesLock for multinode-508000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 14:54:01.644526    4079 start.go:364] duration metric: took 510.208µs to acquireMachinesLock for "multinode-508000"
	I1011 14:54:01.644651    4079 start.go:93] Provisioning new machine with config: &{Name:multinode-508000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:multinode-508000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 14:54:01.644922    4079 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 14:54:01.650545    4079 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1011 14:54:01.698484    4079 start.go:159] libmachine.API.Create for "multinode-508000" (driver="qemu2")
	I1011 14:54:01.698534    4079 client.go:168] LocalClient.Create starting
	I1011 14:54:01.698679    4079 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 14:54:01.698764    4079 main.go:141] libmachine: Decoding PEM data...
	I1011 14:54:01.698784    4079 main.go:141] libmachine: Parsing certificate...
	I1011 14:54:01.698870    4079 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 14:54:01.698926    4079 main.go:141] libmachine: Decoding PEM data...
	I1011 14:54:01.698937    4079 main.go:141] libmachine: Parsing certificate...
	I1011 14:54:01.699858    4079 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 14:54:01.866803    4079 main.go:141] libmachine: Creating SSH key...
	I1011 14:54:01.958828    4079 main.go:141] libmachine: Creating Disk image...
	I1011 14:54:01.958842    4079 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 14:54:01.959059    4079 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/multinode-508000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/multinode-508000/disk.qcow2
	I1011 14:54:01.969122    4079 main.go:141] libmachine: STDOUT: 
	I1011 14:54:01.969141    4079 main.go:141] libmachine: STDERR: 
	I1011 14:54:01.969191    4079 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/multinode-508000/disk.qcow2 +20000M
	I1011 14:54:01.977740    4079 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 14:54:01.977755    4079 main.go:141] libmachine: STDERR: 
	I1011 14:54:01.977766    4079 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/multinode-508000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/multinode-508000/disk.qcow2
	I1011 14:54:01.977770    4079 main.go:141] libmachine: Starting QEMU VM...
	I1011 14:54:01.977779    4079 qemu.go:418] Using hvf for hardware acceleration
	I1011 14:54:01.977805    4079 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/multinode-508000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/multinode-508000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/multinode-508000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:b3:36:6f:80:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/multinode-508000/disk.qcow2
	I1011 14:54:01.979558    4079 main.go:141] libmachine: STDOUT: 
	I1011 14:54:01.979571    4079 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 14:54:01.979583    4079 client.go:171] duration metric: took 281.046792ms to LocalClient.Create
	I1011 14:54:03.981784    4079 start.go:128] duration metric: took 2.336868333s to createHost
	I1011 14:54:03.981858    4079 start.go:83] releasing machines lock for "multinode-508000", held for 2.33733125s
	W1011 14:54:03.982225    4079 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-508000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-508000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 14:54:03.990651    4079 out.go:201] 
	W1011 14:54:03.996842    4079 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 14:54:03.996881    4079 out.go:270] * 
	* 
	W1011 14:54:03.999639    4079 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 14:54:04.008757    4079 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-508000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000: exit status 7 (71.591708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-508000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.85s)
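Note: before launching QEMU, the libmachine trace above creates the machine disk with two qemu-img calls: convert the raw boot image to qcow2, then grow the qcow2 by +20000M. A minimal Go sketch of those two exec calls follows; the file names are illustrative placeholders, while the qemu-img arguments are the ones shown in the trace.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// createDisk mirrors the two qemu-img invocations in the libmachine trace:
	// convert the raw seed image to qcow2, then resize the qcow2 by grow.
	func createDisk(raw, qcow2, grow string) error {
		if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
			return fmt.Errorf("qemu-img convert: %v: %s", err, out)
		}
		if out, err := exec.Command("qemu-img", "resize", qcow2, grow).CombinedOutput(); err != nil {
			return fmt.Errorf("qemu-img resize: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := createDisk("disk.qcow2.raw", "disk.qcow2", "+20000M"); err != nil {
			fmt.Println(err)
		}
	}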

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (119.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-508000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-508000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (134.16575ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-508000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-508000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-508000 -- rollout status deployment/busybox: exit status 1 (63.390542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-508000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (63.417209ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-508000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1011 14:54:04.357801    1707 retry.go:31] will retry after 537.156293ms: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (111.313875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-508000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1011 14:54:05.008581    1707 retry.go:31] will retry after 2.154988805s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.614458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-508000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1011 14:54:07.274538    1707 retry.go:31] will retry after 2.437591103s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.765333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-508000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1011 14:54:09.823210    1707 retry.go:31] will retry after 4.058484086s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.982708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-508000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1011 14:54:13.991973    1707 retry.go:31] will retry after 3.648100031s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.228416ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-508000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1011 14:54:17.750709    1707 retry.go:31] will retry after 4.285985441s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.147417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-508000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1011 14:54:22.149204    1707 retry.go:31] will retry after 14.900632132s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.390709ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-508000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1011 14:54:37.159563    1707 retry.go:31] will retry after 14.284522624s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.855083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-508000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1011 14:54:51.554451    1707 retry.go:31] will retry after 36.753578817s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.225958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-508000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1011 14:55:28.418200    1707 retry.go:31] will retry after 34.50220595s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.47375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-508000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (61.773083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-508000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-508000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-508000 -- exec  -- nslookup kubernetes.io: exit status 1 (61.591083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-508000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-508000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-508000 -- exec  -- nslookup kubernetes.default: exit status 1 (61.87875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-508000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-508000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-508000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (61.717333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-508000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000: exit status 7 (33.6325ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-508000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (119.22s)
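Editor's note on the failure pattern above: the retry.go:31 entries show the harness polling for pod IPs with steadily longer waits (roughly 0.5s growing to ~37s) before DeployApp2Nodes gives up. Below is a minimal Go sketch of that kind of bounded, growing-backoff poll; the helper name getPodIPs, the two-minute budget, and the doubling factor are illustrative assumptions, not minikube's actual test code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// getPodIPs runs the same command the failing test step runs and returns
// the space-separated pod IPs from its stdout. (Illustrative helper.)
func getPodIPs(profile string) (string, error) {
	out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile, "--",
		"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed budget, not the test's real timeout
	backoff := 500 * time.Millisecond
	for {
		ips, err := getPodIPs("multinode-508000")
		if err == nil && ips != "" {
			fmt.Println("pod IPs:", ips)
			return
		}
		if time.Now().After(deadline) {
			fmt.Println("giving up: failed to retrieve Pod IPs:", err)
			return
		}
		time.Sleep(backoff)
		backoff *= 2 // grow the wait, mirroring the increasing retry intervals in the log
	}
}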

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (61.905958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-508000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000: exit status 7 (34.37875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-508000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.10s)

                                                
                                    
TestMultiNode/serial/AddNode (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-508000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-508000 -v 3 --alsologtostderr: exit status 83 (45.741708ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-508000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-508000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 14:56:03.441704    4165 out.go:345] Setting OutFile to fd 1 ...
	I1011 14:56:03.442121    4165 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:56:03.442124    4165 out.go:358] Setting ErrFile to fd 2...
	I1011 14:56:03.442127    4165 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:56:03.442284    4165 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 14:56:03.442505    4165 mustload.go:65] Loading cluster: multinode-508000
	I1011 14:56:03.442718    4165 config.go:182] Loaded profile config "multinode-508000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 14:56:03.447801    4165 out.go:177] * The control-plane node multinode-508000 host is not running: state=Stopped
	I1011 14:56:03.450821    4165 out.go:177]   To start a cluster, run: "minikube start -p multinode-508000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-508000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000: exit status 7 (33.603417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-508000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)
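Editor's note: exit status 83 here is minikube reporting that the profile's control-plane host is Stopped, so `node add` cannot proceed; the post-mortem confirms the same state via `status --format={{.Host}}`. A small sketch of checking host state before attempting `node add` follows, assuming the same binary and profile name as the log; the flow and error handling are illustrative only.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostState mirrors the post-mortem command from helpers_test.go:239; the
// exit code is ignored because a Stopped host still prints its state.
func hostState(profile string) string {
	out, _ := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	profile := "multinode-508000"
	if state := hostState(profile); state != "Running" {
		// Same remedy the exit-83 output suggests above.
		fmt.Printf("host is %q; run: minikube start -p %s\n", state, profile)
		return
	}
	out, err := exec.Command("out/minikube-darwin-arm64", "node", "add",
		"-p", profile, "-v", "3").CombinedOutput()
	fmt.Println(string(out), err)
}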

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-508000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-508000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (32.2995ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-508000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-508000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-508000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000: exit status 7 (35.60075ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-508000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.07s)
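Editor's note: the secondary error "unexpected end of JSON input" is what encoding/json reports when handed empty input, which is all the kubectl invocation produced once the multinode-508000 context lookup failed. A hedged sketch of guarding that decode follows; the function name nodeLabelsRaw and the json.Valid pre-check are assumptions for illustration, not the test's real logic.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// nodeLabelsRaw returns the raw label list emitted by kubectl, failing
// early with a clearer message when the context is missing and kubectl
// prints nothing to stdout. (Illustrative; not the test's real helper.)
func nodeLabelsRaw(ctx string) (string, error) {
	out, err := exec.Command("kubectl", "--context", ctx, "get", "nodes", "-o",
		"jsonpath=[{range .items[*]}{.metadata.labels},{end}]").Output()
	if err != nil {
		return "", fmt.Errorf("kubectl get nodes failed: %w", err)
	}
	s := strings.TrimSpace(string(out))
	if s == "" {
		// Decoding "" is exactly what yields "unexpected end of JSON input".
		return "", fmt.Errorf("kubectl produced no output; nothing to decode")
	}
	if !json.Valid([]byte(s)) {
		return "", fmt.Errorf("label list is not valid JSON: %q", s)
	}
	return s, nil
}

func main() {
	labels, err := nodeLabelsRaw("multinode-508000")
	fmt.Println(labels, err)
}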

                                                
                                    
TestMultiNode/serial/ProfileList (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-508000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-508000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-508000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMN
UMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"multinode-508000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVe
rsion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\"
:\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000: exit status 7 (36.694375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-508000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.09s)
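Editor's note: the assertion at multinode_test.go:166 counts the Nodes entries inside the profile's Config and expected 3, while the quoted JSON contains only the single control-plane node. A minimal sketch of that count follows, with struct fields trimmed to what the quoted output actually shows (valid, Name, Config.Nodes, ControlPlane, Worker); these types are illustrative, not minikube's real config structs.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors only the fields of `profile list --output json`
// that appear in the quoted output above; it is not minikube's schema.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []struct {
				Name         string `json:"Name"`
				ControlPlane bool   `json:"ControlPlane"`
				Worker       bool   `json:"Worker"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pl.Valid {
		// The failing run reports 1 entry here where the test expects 3.
		fmt.Printf("profile %q has %d node(s)\n", p.Name, len(p.Config.Nodes))
	}
}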

                                                
                                    
TestMultiNode/serial/CopyFile (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-508000 status --output json --alsologtostderr: exit status 7 (34.237375ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-508000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 14:56:03.682276    4177 out.go:345] Setting OutFile to fd 1 ...
	I1011 14:56:03.682483    4177 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:56:03.682486    4177 out.go:358] Setting ErrFile to fd 2...
	I1011 14:56:03.682488    4177 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:56:03.682621    4177 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 14:56:03.682762    4177 out.go:352] Setting JSON to true
	I1011 14:56:03.682773    4177 mustload.go:65] Loading cluster: multinode-508000
	I1011 14:56:03.682827    4177 notify.go:220] Checking for updates...
	I1011 14:56:03.683022    4177 config.go:182] Loaded profile config "multinode-508000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 14:56:03.683032    4177 status.go:174] checking status of multinode-508000 ...
	I1011 14:56:03.683283    4177 status.go:371] multinode-508000 host status = "Stopped" (err=<nil>)
	I1011 14:56:03.683286    4177 status.go:384] host is not running, skipping remaining checks
	I1011 14:56:03.683289    4177 status.go:176] multinode-508000 status: &{Name:multinode-508000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-508000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000: exit status 7 (34.675ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-508000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.07s)
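Editor's note: the decode failure at multinode_test.go:191 happens because the stdout above is a single JSON object while the test unmarshals into a []cluster.Status slice. A small sketch of a decode that tolerates both shapes follows; the Status struct here is reduced to the fields visible in the output and is not the real cluster.Status type.

package main

import (
	"encoding/json"
	"fmt"
)

// Status is trimmed to the fields visible in the stdout above; it is not
// the cluster.Status type the test actually uses.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

// decodeStatuses accepts either the array form the test expects or the
// single-object form the command printed in this run.
func decodeStatuses(data []byte) ([]Status, error) {
	var many []Status
	if err := json.Unmarshal(data, &many); err == nil {
		return many, nil
	}
	var one Status
	if err := json.Unmarshal(data, &one); err != nil {
		return nil, err
	}
	return []Status{one}, nil
}

func main() {
	raw := []byte(`{"Name":"multinode-508000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	statuses, err := decodeStatuses(raw)
	fmt.Println(statuses, err)
}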

                                                
                                    
TestMultiNode/serial/StopNode (0.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-508000 node stop m03: exit status 85 (51.009583ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-508000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-508000 status: exit status 7 (34.305667ms)

                                                
                                                
-- stdout --
	multinode-508000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-508000 status --alsologtostderr: exit status 7 (34.157583ms)

                                                
                                                
-- stdout --
	multinode-508000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 14:56:03.837453    4185 out.go:345] Setting OutFile to fd 1 ...
	I1011 14:56:03.837637    4185 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:56:03.837640    4185 out.go:358] Setting ErrFile to fd 2...
	I1011 14:56:03.837643    4185 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:56:03.837767    4185 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 14:56:03.837896    4185 out.go:352] Setting JSON to false
	I1011 14:56:03.837908    4185 mustload.go:65] Loading cluster: multinode-508000
	I1011 14:56:03.837971    4185 notify.go:220] Checking for updates...
	I1011 14:56:03.838121    4185 config.go:182] Loaded profile config "multinode-508000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 14:56:03.838134    4185 status.go:174] checking status of multinode-508000 ...
	I1011 14:56:03.838367    4185 status.go:371] multinode-508000 host status = "Stopped" (err=<nil>)
	I1011 14:56:03.838370    4185 status.go:384] host is not running, skipping remaining checks
	I1011 14:56:03.838372    4185 status.go:176] multinode-508000 status: &{Name:multinode-508000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-508000 status --alsologtostderr": multinode-508000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000: exit status 7 (34.301708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-508000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.15s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (51.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-508000 node start m03 -v=7 --alsologtostderr: exit status 85 (49.862ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 14:56:03.906406    4189 out.go:345] Setting OutFile to fd 1 ...
	I1011 14:56:03.906657    4189 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:56:03.906660    4189 out.go:358] Setting ErrFile to fd 2...
	I1011 14:56:03.906663    4189 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:56:03.906794    4189 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 14:56:03.907026    4189 mustload.go:65] Loading cluster: multinode-508000
	I1011 14:56:03.907223    4189 config.go:182] Loaded profile config "multinode-508000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 14:56:03.911848    4189 out.go:201] 
	W1011 14:56:03.914823    4189 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1011 14:56:03.914828    4189 out.go:270] * 
	* 
	W1011 14:56:03.916323    4189 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 14:56:03.918797    4189 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:284: I1011 14:56:03.906406    4189 out.go:345] Setting OutFile to fd 1 ...
I1011 14:56:03.906657    4189 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 14:56:03.906660    4189 out.go:358] Setting ErrFile to fd 2...
I1011 14:56:03.906663    4189 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 14:56:03.906794    4189 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
I1011 14:56:03.907026    4189 mustload.go:65] Loading cluster: multinode-508000
I1011 14:56:03.907223    4189 config.go:182] Loaded profile config "multinode-508000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1011 14:56:03.911848    4189 out.go:201] 
W1011 14:56:03.914823    4189 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1011 14:56:03.914828    4189 out.go:270] * 
* 
W1011 14:56:03.916323    4189 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1011 14:56:03.918797    4189 out.go:201] 

                                                
                                                
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-508000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr: exit status 7 (34.868041ms)

                                                
                                                
-- stdout --
	multinode-508000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 14:56:03.955986    4191 out.go:345] Setting OutFile to fd 1 ...
	I1011 14:56:03.956160    4191 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:56:03.956163    4191 out.go:358] Setting ErrFile to fd 2...
	I1011 14:56:03.956165    4191 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:56:03.956305    4191 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 14:56:03.956424    4191 out.go:352] Setting JSON to false
	I1011 14:56:03.956437    4191 mustload.go:65] Loading cluster: multinode-508000
	I1011 14:56:03.956628    4191 notify.go:220] Checking for updates...
	I1011 14:56:03.957534    4191 config.go:182] Loaded profile config "multinode-508000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 14:56:03.957547    4191 status.go:174] checking status of multinode-508000 ...
	I1011 14:56:03.957769    4191 status.go:371] multinode-508000 host status = "Stopped" (err=<nil>)
	I1011 14:56:03.957773    4191 status.go:384] host is not running, skipping remaining checks
	I1011 14:56:03.957775    4191 status.go:176] multinode-508000 status: &{Name:multinode-508000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1011 14:56:03.958826    1707 retry.go:31] will retry after 1.476986622s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr: exit status 7 (78.541875ms)

                                                
                                                
-- stdout --
	multinode-508000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 14:56:05.514585    4194 out.go:345] Setting OutFile to fd 1 ...
	I1011 14:56:05.514800    4194 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:56:05.514804    4194 out.go:358] Setting ErrFile to fd 2...
	I1011 14:56:05.514807    4194 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:56:05.514950    4194 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 14:56:05.515089    4194 out.go:352] Setting JSON to false
	I1011 14:56:05.515103    4194 mustload.go:65] Loading cluster: multinode-508000
	I1011 14:56:05.515138    4194 notify.go:220] Checking for updates...
	I1011 14:56:05.515344    4194 config.go:182] Loaded profile config "multinode-508000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 14:56:05.515356    4194 status.go:174] checking status of multinode-508000 ...
	I1011 14:56:05.515646    4194 status.go:371] multinode-508000 host status = "Stopped" (err=<nil>)
	I1011 14:56:05.515650    4194 status.go:384] host is not running, skipping remaining checks
	I1011 14:56:05.515653    4194 status.go:176] multinode-508000 status: &{Name:multinode-508000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1011 14:56:05.516680    1707 retry.go:31] will retry after 1.280958825s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr: exit status 7 (77.662834ms)

                                                
                                                
-- stdout --
	multinode-508000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 14:56:06.875393    4196 out.go:345] Setting OutFile to fd 1 ...
	I1011 14:56:06.875629    4196 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:56:06.875634    4196 out.go:358] Setting ErrFile to fd 2...
	I1011 14:56:06.875637    4196 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:56:06.875822    4196 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 14:56:06.875983    4196 out.go:352] Setting JSON to false
	I1011 14:56:06.875998    4196 mustload.go:65] Loading cluster: multinode-508000
	I1011 14:56:06.876035    4196 notify.go:220] Checking for updates...
	I1011 14:56:06.876281    4196 config.go:182] Loaded profile config "multinode-508000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 14:56:06.876291    4196 status.go:174] checking status of multinode-508000 ...
	I1011 14:56:06.876626    4196 status.go:371] multinode-508000 host status = "Stopped" (err=<nil>)
	I1011 14:56:06.876630    4196 status.go:384] host is not running, skipping remaining checks
	I1011 14:56:06.876633    4196 status.go:176] multinode-508000 status: &{Name:multinode-508000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1011 14:56:06.877623    1707 retry.go:31] will retry after 2.401648062s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr: exit status 7 (78.456542ms)

                                                
                                                
-- stdout --
	multinode-508000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 14:56:09.357964    4201 out.go:345] Setting OutFile to fd 1 ...
	I1011 14:56:09.358198    4201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:56:09.358202    4201 out.go:358] Setting ErrFile to fd 2...
	I1011 14:56:09.358205    4201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:56:09.358365    4201 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 14:56:09.358520    4201 out.go:352] Setting JSON to false
	I1011 14:56:09.358533    4201 mustload.go:65] Loading cluster: multinode-508000
	I1011 14:56:09.358568    4201 notify.go:220] Checking for updates...
	I1011 14:56:09.358786    4201 config.go:182] Loaded profile config "multinode-508000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 14:56:09.358795    4201 status.go:174] checking status of multinode-508000 ...
	I1011 14:56:09.359082    4201 status.go:371] multinode-508000 host status = "Stopped" (err=<nil>)
	I1011 14:56:09.359087    4201 status.go:384] host is not running, skipping remaining checks
	I1011 14:56:09.359089    4201 status.go:176] multinode-508000 status: &{Name:multinode-508000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1011 14:56:09.360166    1707 retry.go:31] will retry after 2.424891371s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr: exit status 7 (78.59125ms)

                                                
                                                
-- stdout --
	multinode-508000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 14:56:11.863858    4203 out.go:345] Setting OutFile to fd 1 ...
	I1011 14:56:11.864069    4203 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:56:11.864073    4203 out.go:358] Setting ErrFile to fd 2...
	I1011 14:56:11.864076    4203 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:56:11.864234    4203 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 14:56:11.864369    4203 out.go:352] Setting JSON to false
	I1011 14:56:11.864383    4203 mustload.go:65] Loading cluster: multinode-508000
	I1011 14:56:11.864443    4203 notify.go:220] Checking for updates...
	I1011 14:56:11.864657    4203 config.go:182] Loaded profile config "multinode-508000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 14:56:11.864667    4203 status.go:174] checking status of multinode-508000 ...
	I1011 14:56:11.864974    4203 status.go:371] multinode-508000 host status = "Stopped" (err=<nil>)
	I1011 14:56:11.864978    4203 status.go:384] host is not running, skipping remaining checks
	I1011 14:56:11.864980    4203 status.go:176] multinode-508000 status: &{Name:multinode-508000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1011 14:56:11.865952    1707 retry.go:31] will retry after 7.234179853s: exit status 7
E1011 14:56:12.401633    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/addons-392000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr: exit status 7 (79.521792ms)

                                                
                                                
-- stdout --
	multinode-508000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 14:56:19.179899    4205 out.go:345] Setting OutFile to fd 1 ...
	I1011 14:56:19.180105    4205 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:56:19.180109    4205 out.go:358] Setting ErrFile to fd 2...
	I1011 14:56:19.180112    4205 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:56:19.180257    4205 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 14:56:19.180410    4205 out.go:352] Setting JSON to false
	I1011 14:56:19.180425    4205 mustload.go:65] Loading cluster: multinode-508000
	I1011 14:56:19.180457    4205 notify.go:220] Checking for updates...
	I1011 14:56:19.180672    4205 config.go:182] Loaded profile config "multinode-508000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 14:56:19.180682    4205 status.go:174] checking status of multinode-508000 ...
	I1011 14:56:19.180966    4205 status.go:371] multinode-508000 host status = "Stopped" (err=<nil>)
	I1011 14:56:19.180971    4205 status.go:384] host is not running, skipping remaining checks
	I1011 14:56:19.180973    4205 status.go:176] multinode-508000 status: &{Name:multinode-508000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1011 14:56:19.181963    1707 retry.go:31] will retry after 10.698128859s: exit status 7
E1011 14:56:29.302133    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/addons-392000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr: exit status 7 (78.981709ms)

                                                
                                                
-- stdout --
	multinode-508000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 14:56:29.958001    4207 out.go:345] Setting OutFile to fd 1 ...
	I1011 14:56:29.958232    4207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:56:29.958236    4207 out.go:358] Setting ErrFile to fd 2...
	I1011 14:56:29.958238    4207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:56:29.958382    4207 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 14:56:29.958546    4207 out.go:352] Setting JSON to false
	I1011 14:56:29.958561    4207 mustload.go:65] Loading cluster: multinode-508000
	I1011 14:56:29.958608    4207 notify.go:220] Checking for updates...
	I1011 14:56:29.958832    4207 config.go:182] Loaded profile config "multinode-508000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 14:56:29.958843    4207 status.go:174] checking status of multinode-508000 ...
	I1011 14:56:29.959145    4207 status.go:371] multinode-508000 host status = "Stopped" (err=<nil>)
	I1011 14:56:29.959149    4207 status.go:384] host is not running, skipping remaining checks
	I1011 14:56:29.959152    4207 status.go:176] multinode-508000 status: &{Name:multinode-508000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1011 14:56:29.960210    1707 retry.go:31] will retry after 16.107118278s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr: exit status 7 (78.874833ms)

                                                
                                                
-- stdout --
	multinode-508000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 14:56:46.146297    4212 out.go:345] Setting OutFile to fd 1 ...
	I1011 14:56:46.146527    4212 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:56:46.146531    4212 out.go:358] Setting ErrFile to fd 2...
	I1011 14:56:46.146534    4212 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:56:46.146717    4212 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 14:56:46.146857    4212 out.go:352] Setting JSON to false
	I1011 14:56:46.146871    4212 mustload.go:65] Loading cluster: multinode-508000
	I1011 14:56:46.146907    4212 notify.go:220] Checking for updates...
	I1011 14:56:46.147121    4212 config.go:182] Loaded profile config "multinode-508000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 14:56:46.147130    4212 status.go:174] checking status of multinode-508000 ...
	I1011 14:56:46.147439    4212 status.go:371] multinode-508000 host status = "Stopped" (err=<nil>)
	I1011 14:56:46.147444    4212 status.go:384] host is not running, skipping remaining checks
	I1011 14:56:46.147446    4212 status.go:176] multinode-508000 status: &{Name:multinode-508000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1011 14:56:46.148521    1707 retry.go:31] will retry after 8.982925886s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr: exit status 7 (80.093833ms)

                                                
                                                
-- stdout --
	multinode-508000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 14:56:55.211759    4214 out.go:345] Setting OutFile to fd 1 ...
	I1011 14:56:55.211970    4214 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:56:55.211974    4214 out.go:358] Setting ErrFile to fd 2...
	I1011 14:56:55.211977    4214 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:56:55.212137    4214 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 14:56:55.212275    4214 out.go:352] Setting JSON to false
	I1011 14:56:55.212288    4214 mustload.go:65] Loading cluster: multinode-508000
	I1011 14:56:55.212321    4214 notify.go:220] Checking for updates...
	I1011 14:56:55.212529    4214 config.go:182] Loaded profile config "multinode-508000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 14:56:55.212540    4214 status.go:174] checking status of multinode-508000 ...
	I1011 14:56:55.212812    4214 status.go:371] multinode-508000 host status = "Stopped" (err=<nil>)
	I1011 14:56:55.212817    4214 status.go:384] host is not running, skipping remaining checks
	I1011 14:56:55.212819    4214 status.go:176] multinode-508000 status: &{Name:multinode-508000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000: exit status 7 (35.510208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-508000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (51.38s)
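The retries in this block come from the harness backoff (retry.go re-runs the status command after roughly 16s and 9s waits); every attempt exits with status 7 because the host never leaves the Stopped state. A rough manual equivalent of that polling, reusing the exact command from the log and assuming the same profile name, would be:

    # poll status a few times the way the harness does; in this run each attempt
    # exits with status 7 (host, kubelet and apiserver all reported as Stopped)
    for i in 1 2 3; do
      out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr && break
      sleep 10
    done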

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (7.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-508000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-508000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-508000: (2.136493916s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-508000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-508000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.22425875s)

                                                
                                                
-- stdout --
	* [multinode-508000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-508000" primary control-plane node in "multinode-508000" cluster
	* Restarting existing qemu2 VM for "multinode-508000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-508000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 14:56:57.488788    4232 out.go:345] Setting OutFile to fd 1 ...
	I1011 14:56:57.488967    4232 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:56:57.488972    4232 out.go:358] Setting ErrFile to fd 2...
	I1011 14:56:57.488974    4232 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:56:57.489145    4232 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 14:56:57.490391    4232 out.go:352] Setting JSON to false
	I1011 14:56:57.509853    4232 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5187,"bootTime":1728678630,"procs":500,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 14:56:57.509926    4232 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 14:56:57.515329    4232 out.go:177] * [multinode-508000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 14:56:57.522433    4232 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 14:56:57.522492    4232 notify.go:220] Checking for updates...
	I1011 14:56:57.529379    4232 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 14:56:57.532328    4232 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 14:56:57.535385    4232 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 14:56:57.538373    4232 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 14:56:57.541357    4232 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 14:56:57.544719    4232 config.go:182] Loaded profile config "multinode-508000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 14:56:57.544775    4232 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 14:56:57.549362    4232 out.go:177] * Using the qemu2 driver based on existing profile
	I1011 14:56:57.556273    4232 start.go:297] selected driver: qemu2
	I1011 14:56:57.556280    4232 start.go:901] validating driver "qemu2" against &{Name:multinode-508000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:multinode-508000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 14:56:57.556341    4232 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 14:56:57.558821    4232 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 14:56:57.558844    4232 cni.go:84] Creating CNI manager for ""
	I1011 14:56:57.558871    4232 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1011 14:56:57.558908    4232 start.go:340] cluster config:
	{Name:multinode-508000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-508000 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 14:56:57.563356    4232 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 14:56:57.570273    4232 out.go:177] * Starting "multinode-508000" primary control-plane node in "multinode-508000" cluster
	I1011 14:56:57.574343    4232 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1011 14:56:57.574356    4232 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1011 14:56:57.574369    4232 cache.go:56] Caching tarball of preloaded images
	I1011 14:56:57.574462    4232 preload.go:172] Found /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1011 14:56:57.574468    4232 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1011 14:56:57.574514    4232 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/multinode-508000/config.json ...
	I1011 14:56:57.574901    4232 start.go:360] acquireMachinesLock for multinode-508000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 14:56:57.574946    4232 start.go:364] duration metric: took 39.708µs to acquireMachinesLock for "multinode-508000"
	I1011 14:56:57.574955    4232 start.go:96] Skipping create...Using existing machine configuration
	I1011 14:56:57.574959    4232 fix.go:54] fixHost starting: 
	I1011 14:56:57.575076    4232 fix.go:112] recreateIfNeeded on multinode-508000: state=Stopped err=<nil>
	W1011 14:56:57.575083    4232 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 14:56:57.582292    4232 out.go:177] * Restarting existing qemu2 VM for "multinode-508000" ...
	I1011 14:56:57.586347    4232 qemu.go:418] Using hvf for hardware acceleration
	I1011 14:56:57.586396    4232 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/multinode-508000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/multinode-508000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/multinode-508000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:b3:36:6f:80:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/multinode-508000/disk.qcow2
	I1011 14:56:57.588594    4232 main.go:141] libmachine: STDOUT: 
	I1011 14:56:57.588619    4232 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 14:56:57.588646    4232 fix.go:56] duration metric: took 13.6845ms for fixHost
	I1011 14:56:57.588651    4232 start.go:83] releasing machines lock for "multinode-508000", held for 13.701125ms
	W1011 14:56:57.588656    4232 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 14:56:57.588690    4232 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 14:56:57.588694    4232 start.go:729] Will try again in 5 seconds ...
	I1011 14:57:02.590850    4232 start.go:360] acquireMachinesLock for multinode-508000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 14:57:02.591232    4232 start.go:364] duration metric: took 298.333µs to acquireMachinesLock for "multinode-508000"
	I1011 14:57:02.591333    4232 start.go:96] Skipping create...Using existing machine configuration
	I1011 14:57:02.591350    4232 fix.go:54] fixHost starting: 
	I1011 14:57:02.592030    4232 fix.go:112] recreateIfNeeded on multinode-508000: state=Stopped err=<nil>
	W1011 14:57:02.592055    4232 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 14:57:02.596495    4232 out.go:177] * Restarting existing qemu2 VM for "multinode-508000" ...
	I1011 14:57:02.604387    4232 qemu.go:418] Using hvf for hardware acceleration
	I1011 14:57:02.604697    4232 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/multinode-508000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/multinode-508000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/multinode-508000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:b3:36:6f:80:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/multinode-508000/disk.qcow2
	I1011 14:57:02.614219    4232 main.go:141] libmachine: STDOUT: 
	I1011 14:57:02.614273    4232 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 14:57:02.614334    4232 fix.go:56] duration metric: took 22.985083ms for fixHost
	I1011 14:57:02.614357    4232 start.go:83] releasing machines lock for "multinode-508000", held for 23.102292ms
	W1011 14:57:02.614523    4232 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-508000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-508000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 14:57:02.619459    4232 out.go:201] 
	W1011 14:57:02.623449    4232 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 14:57:02.623488    4232 out.go:270] * 
	* 
	W1011 14:57:02.626018    4232 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 14:57:02.631016    4232 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-508000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-508000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000: exit status 7 (35.347ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-508000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (7.51s)
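Every restart attempt in this block fails at the same point: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client and the client cannot connect to the unix socket at /var/run/socket_vmnet (Connection refused), so the VM never boots and minikube exits with GUEST_PROVISION. A minimal sketch for checking whether the socket and its daemon exist on the build host (paths taken from the command line logged above; that socket_vmnet is expected to be running as a separate daemon on this agent is an assumption):

    ls -l /var/run/socket_vmnet     # does the unix socket minikube points at exist?
    pgrep -fl socket_vmnet          # is any socket_vmnet daemon process running?
    # the log wraps qemu with socket_vmnet_client; wrapping a trivial command the
    # same way reproduces the connection failure in isolation
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true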

                                                
                                    
TestMultiNode/serial/DeleteNode (0.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-508000 node delete m03: exit status 83 (44.167584ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-508000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-508000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-508000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-508000 status --alsologtostderr: exit status 7 (34.5025ms)

                                                
                                                
-- stdout --
	multinode-508000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 14:57:02.831973    4246 out.go:345] Setting OutFile to fd 1 ...
	I1011 14:57:02.832147    4246 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:57:02.832150    4246 out.go:358] Setting ErrFile to fd 2...
	I1011 14:57:02.832152    4246 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:57:02.832294    4246 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 14:57:02.832426    4246 out.go:352] Setting JSON to false
	I1011 14:57:02.832438    4246 mustload.go:65] Loading cluster: multinode-508000
	I1011 14:57:02.832501    4246 notify.go:220] Checking for updates...
	I1011 14:57:02.832648    4246 config.go:182] Loaded profile config "multinode-508000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 14:57:02.832657    4246 status.go:174] checking status of multinode-508000 ...
	I1011 14:57:02.832899    4246 status.go:371] multinode-508000 host status = "Stopped" (err=<nil>)
	I1011 14:57:02.832902    4246 status.go:384] host is not running, skipping remaining checks
	I1011 14:57:02.832904    4246 status.go:176] multinode-508000 status: &{Name:multinode-508000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-508000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000: exit status 7 (34.296916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-508000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (2.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-508000 stop: (1.978158125s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-508000 status: exit status 7 (68.898708ms)

                                                
                                                
-- stdout --
	multinode-508000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-508000 status --alsologtostderr: exit status 7 (36.19575ms)

                                                
                                                
-- stdout --
	multinode-508000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 14:57:04.950228    4262 out.go:345] Setting OutFile to fd 1 ...
	I1011 14:57:04.950392    4262 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:57:04.950395    4262 out.go:358] Setting ErrFile to fd 2...
	I1011 14:57:04.950398    4262 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:57:04.950524    4262 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 14:57:04.950648    4262 out.go:352] Setting JSON to false
	I1011 14:57:04.950662    4262 mustload.go:65] Loading cluster: multinode-508000
	I1011 14:57:04.950702    4262 notify.go:220] Checking for updates...
	I1011 14:57:04.950868    4262 config.go:182] Loaded profile config "multinode-508000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 14:57:04.950875    4262 status.go:174] checking status of multinode-508000 ...
	I1011 14:57:04.951115    4262 status.go:371] multinode-508000 host status = "Stopped" (err=<nil>)
	I1011 14:57:04.951118    4262 status.go:384] host is not running, skipping remaining checks
	I1011 14:57:04.951120    4262 status.go:176] multinode-508000 status: &{Name:multinode-508000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-508000 status --alsologtostderr": multinode-508000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-508000 status --alsologtostderr": multinode-508000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000: exit status 7 (34.629ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-508000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (2.12s)
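The two "incorrect number of stopped ..." assertions suggest the test wants a Stopped host and kubelet reported for each node of the intended two-node cluster, while the status output above only ever lists the single control-plane node (the worker was never added earlier in the run). A quick way to see what such a check would be counting, reusing the status command from the log (the expected count of two per line is an assumption based on the test name, not something printed here):

    out/minikube-darwin-arm64 -p multinode-508000 status --alsologtostderr | grep -c 'host: Stopped'
    out/minikube-darwin-arm64 -p multinode-508000 status --alsologtostderr | grep -c 'kubelet: Stopped'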

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-508000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-508000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.1892005s)

                                                
                                                
-- stdout --
	* [multinode-508000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-508000" primary control-plane node in "multinode-508000" cluster
	* Restarting existing qemu2 VM for "multinode-508000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-508000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 14:57:05.019063    4266 out.go:345] Setting OutFile to fd 1 ...
	I1011 14:57:05.019235    4266 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:57:05.019238    4266 out.go:358] Setting ErrFile to fd 2...
	I1011 14:57:05.019240    4266 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:57:05.019377    4266 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 14:57:05.020453    4266 out.go:352] Setting JSON to false
	I1011 14:57:05.037959    4266 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5195,"bootTime":1728678630,"procs":500,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 14:57:05.038042    4266 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 14:57:05.043692    4266 out.go:177] * [multinode-508000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 14:57:05.050546    4266 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 14:57:05.050621    4266 notify.go:220] Checking for updates...
	I1011 14:57:05.055905    4266 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 14:57:05.058566    4266 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 14:57:05.061591    4266 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 14:57:05.064570    4266 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 14:57:05.067620    4266 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 14:57:05.070847    4266 config.go:182] Loaded profile config "multinode-508000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 14:57:05.071115    4266 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 14:57:05.075569    4266 out.go:177] * Using the qemu2 driver based on existing profile
	I1011 14:57:05.082537    4266 start.go:297] selected driver: qemu2
	I1011 14:57:05.082543    4266 start.go:901] validating driver "qemu2" against &{Name:multinode-508000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:multinode-508000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 14:57:05.082583    4266 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 14:57:05.085024    4266 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 14:57:05.085053    4266 cni.go:84] Creating CNI manager for ""
	I1011 14:57:05.085072    4266 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1011 14:57:05.085122    4266 start.go:340] cluster config:
	{Name:multinode-508000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-508000 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 14:57:05.089558    4266 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 14:57:05.097570    4266 out.go:177] * Starting "multinode-508000" primary control-plane node in "multinode-508000" cluster
	I1011 14:57:05.101533    4266 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1011 14:57:05.101548    4266 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1011 14:57:05.101561    4266 cache.go:56] Caching tarball of preloaded images
	I1011 14:57:05.101622    4266 preload.go:172] Found /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1011 14:57:05.101629    4266 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1011 14:57:05.101700    4266 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/multinode-508000/config.json ...
	I1011 14:57:05.102115    4266 start.go:360] acquireMachinesLock for multinode-508000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 14:57:05.102144    4266 start.go:364] duration metric: took 23.667µs to acquireMachinesLock for "multinode-508000"
	I1011 14:57:05.102154    4266 start.go:96] Skipping create...Using existing machine configuration
	I1011 14:57:05.102158    4266 fix.go:54] fixHost starting: 
	I1011 14:57:05.102270    4266 fix.go:112] recreateIfNeeded on multinode-508000: state=Stopped err=<nil>
	W1011 14:57:05.102278    4266 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 14:57:05.110569    4266 out.go:177] * Restarting existing qemu2 VM for "multinode-508000" ...
	I1011 14:57:05.114455    4266 qemu.go:418] Using hvf for hardware acceleration
	I1011 14:57:05.114493    4266 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/multinode-508000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/multinode-508000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/multinode-508000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:b3:36:6f:80:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/multinode-508000/disk.qcow2
	I1011 14:57:05.116672    4266 main.go:141] libmachine: STDOUT: 
	I1011 14:57:05.116694    4266 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 14:57:05.116728    4266 fix.go:56] duration metric: took 14.567625ms for fixHost
	I1011 14:57:05.116734    4266 start.go:83] releasing machines lock for "multinode-508000", held for 14.58525ms
	W1011 14:57:05.116740    4266 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 14:57:05.116786    4266 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 14:57:05.116792    4266 start.go:729] Will try again in 5 seconds ...
	I1011 14:57:10.119015    4266 start.go:360] acquireMachinesLock for multinode-508000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 14:57:10.119512    4266 start.go:364] duration metric: took 400.208µs to acquireMachinesLock for "multinode-508000"
	I1011 14:57:10.119668    4266 start.go:96] Skipping create...Using existing machine configuration
	I1011 14:57:10.119689    4266 fix.go:54] fixHost starting: 
	I1011 14:57:10.120458    4266 fix.go:112] recreateIfNeeded on multinode-508000: state=Stopped err=<nil>
	W1011 14:57:10.120485    4266 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 14:57:10.125008    4266 out.go:177] * Restarting existing qemu2 VM for "multinode-508000" ...
	I1011 14:57:10.130253    4266 qemu.go:418] Using hvf for hardware acceleration
	I1011 14:57:10.130471    4266 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/multinode-508000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/multinode-508000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/multinode-508000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:b3:36:6f:80:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/multinode-508000/disk.qcow2
	I1011 14:57:10.141103    4266 main.go:141] libmachine: STDOUT: 
	I1011 14:57:10.141175    4266 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 14:57:10.141331    4266 fix.go:56] duration metric: took 21.642375ms for fixHost
	I1011 14:57:10.141360    4266 start.go:83] releasing machines lock for "multinode-508000", held for 21.823625ms
	W1011 14:57:10.141549    4266 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-508000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-508000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 14:57:10.149029    4266 out.go:201] 
	W1011 14:57:10.152983    4266 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 14:57:10.153009    4266 out.go:270] * 
	* 
	W1011 14:57:10.155911    4266 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 14:57:10.162972    4266 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-508000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000: exit status 7 (73.943042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-508000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.27s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (19.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-508000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-508000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-508000-m01 --driver=qemu2 : exit status 80 (9.769951417s)

                                                
                                                
-- stdout --
	* [multinode-508000-m01] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-508000-m01" primary control-plane node in "multinode-508000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-508000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-508000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-508000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-508000-m02 --driver=qemu2 : exit status 80 (9.900295959s)

                                                
                                                
-- stdout --
	* [multinode-508000-m02] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-508000-m02" primary control-plane node in "multinode-508000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-508000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-508000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-508000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-508000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-508000: exit status 83 (82.731ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-508000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-508000"

                                                
                                                
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-508000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000: exit status 7 (35.325ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-508000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (19.91s)
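Both throwaway profiles created here hit the same /var/run/socket_vmnet connection failure as the restarts above, and in the output shown only the -m02 profile is deleted afterwards. If the multinode-508000-m01 profile were left behind on the agent, cleanup would be the usual delete (a hypothetical follow-up, not part of the test run):

    out/minikube-darwin-arm64 delete -p multinode-508000-m01
    # or clear every minikube profile on the agent in one go
    out/minikube-darwin-arm64 delete --all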

                                                
                                    
TestPreload (10.07s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-324000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-324000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.914075041s)

                                                
                                                
-- stdout --
	* [test-preload-324000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-324000" primary control-plane node in "test-preload-324000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-324000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 14:57:30.302185    4318 out.go:345] Setting OutFile to fd 1 ...
	I1011 14:57:30.302346    4318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:57:30.302350    4318 out.go:358] Setting ErrFile to fd 2...
	I1011 14:57:30.302352    4318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:57:30.302483    4318 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 14:57:30.303571    4318 out.go:352] Setting JSON to false
	I1011 14:57:30.321073    4318 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5220,"bootTime":1728678630,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 14:57:30.321138    4318 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 14:57:30.327292    4318 out.go:177] * [test-preload-324000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 14:57:30.335227    4318 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 14:57:30.335287    4318 notify.go:220] Checking for updates...
	I1011 14:57:30.342162    4318 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 14:57:30.345210    4318 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 14:57:30.348267    4318 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 14:57:30.351224    4318 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 14:57:30.354223    4318 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 14:57:30.357649    4318 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 14:57:30.357695    4318 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 14:57:30.362189    4318 out.go:177] * Using the qemu2 driver based on user configuration
	I1011 14:57:30.369235    4318 start.go:297] selected driver: qemu2
	I1011 14:57:30.369242    4318 start.go:901] validating driver "qemu2" against <nil>
	I1011 14:57:30.369250    4318 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 14:57:30.371824    4318 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 14:57:30.375199    4318 out.go:177] * Automatically selected the socket_vmnet network
	I1011 14:57:30.378308    4318 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 14:57:30.378331    4318 cni.go:84] Creating CNI manager for ""
	I1011 14:57:30.378362    4318 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1011 14:57:30.378369    4318 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1011 14:57:30.378398    4318 start.go:340] cluster config:
	{Name:test-preload-324000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-324000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 14:57:30.383010    4318 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 14:57:30.390203    4318 out.go:177] * Starting "test-preload-324000" primary control-plane node in "test-preload-324000" cluster
	I1011 14:57:30.394247    4318 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I1011 14:57:30.394331    4318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/test-preload-324000/config.json ...
	I1011 14:57:30.394353    4318 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/test-preload-324000/config.json: {Name:mkb12d33a6d14550040dc341c6b6343809ad9b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 14:57:30.394367    4318 cache.go:107] acquiring lock: {Name:mk4458181073552f380e5d174c79ce54460686fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 14:57:30.394369    4318 cache.go:107] acquiring lock: {Name:mk819a5671d8e5e01c449d2f6a51d59643ee57fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 14:57:30.394379    4318 cache.go:107] acquiring lock: {Name:mkf9e8a0fae61d0f7b99a5bdd9b79011a2935308 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 14:57:30.394401    4318 cache.go:107] acquiring lock: {Name:mk44b2638bf76f39078cba21cc7493251449957f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 14:57:30.394528    4318 cache.go:107] acquiring lock: {Name:mkd72b7815af9bb53a51e532838ff2717d610d25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 14:57:30.394597    4318 cache.go:107] acquiring lock: {Name:mk5cfad276026480b0c8164cf6e489344fd856fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 14:57:30.394643    4318 cache.go:107] acquiring lock: {Name:mk63700f44a41b5af45a3f22dbbc4e818990340c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 14:57:30.394937    4318 start.go:360] acquireMachinesLock for test-preload-324000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 14:57:30.394953    4318 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1011 14:57:30.395002    4318 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1011 14:57:30.395014    4318 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1011 14:57:30.395022    4318 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 14:57:30.395054    4318 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1011 14:57:30.395064    4318 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1011 14:57:30.395093    4318 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1011 14:57:30.395096    4318 start.go:364] duration metric: took 143.25µs to acquireMachinesLock for "test-preload-324000"
	I1011 14:57:30.395129    4318 cache.go:107] acquiring lock: {Name:mk0511ea34441b7c34d5118fbb6e7eaf1fe38c10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 14:57:30.395130    4318 start.go:93] Provisioning new machine with config: &{Name:test-preload-324000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-324000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 14:57:30.395224    4318 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 14:57:30.395302    4318 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1011 14:57:30.403251    4318 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1011 14:57:30.409392    4318 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1011 14:57:30.409414    4318 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1011 14:57:30.409461    4318 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1011 14:57:30.409861    4318 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 14:57:30.411664    4318 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1011 14:57:30.412090    4318 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1011 14:57:30.412120    4318 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1011 14:57:30.412134    4318 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1011 14:57:30.423039    4318 start.go:159] libmachine.API.Create for "test-preload-324000" (driver="qemu2")
	I1011 14:57:30.423065    4318 client.go:168] LocalClient.Create starting
	I1011 14:57:30.423141    4318 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 14:57:30.423182    4318 main.go:141] libmachine: Decoding PEM data...
	I1011 14:57:30.423194    4318 main.go:141] libmachine: Parsing certificate...
	I1011 14:57:30.423252    4318 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 14:57:30.423285    4318 main.go:141] libmachine: Decoding PEM data...
	I1011 14:57:30.423294    4318 main.go:141] libmachine: Parsing certificate...
	I1011 14:57:30.423684    4318 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 14:57:30.577210    4318 main.go:141] libmachine: Creating SSH key...
	I1011 14:57:30.691635    4318 main.go:141] libmachine: Creating Disk image...
	I1011 14:57:30.691655    4318 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 14:57:30.691896    4318 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/test-preload-324000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/test-preload-324000/disk.qcow2
	I1011 14:57:30.702017    4318 main.go:141] libmachine: STDOUT: 
	I1011 14:57:30.702035    4318 main.go:141] libmachine: STDERR: 
	I1011 14:57:30.702107    4318 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/test-preload-324000/disk.qcow2 +20000M
	I1011 14:57:30.711050    4318 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 14:57:30.711067    4318 main.go:141] libmachine: STDERR: 
	I1011 14:57:30.711079    4318 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/test-preload-324000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/test-preload-324000/disk.qcow2
	I1011 14:57:30.711085    4318 main.go:141] libmachine: Starting QEMU VM...
	I1011 14:57:30.711101    4318 qemu.go:418] Using hvf for hardware acceleration
	I1011 14:57:30.711129    4318 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/test-preload-324000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/test-preload-324000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/test-preload-324000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:a7:51:e1:9e:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/test-preload-324000/disk.qcow2
	I1011 14:57:30.713212    4318 main.go:141] libmachine: STDOUT: 
	I1011 14:57:30.713226    4318 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 14:57:30.713247    4318 client.go:171] duration metric: took 290.180333ms to LocalClient.Create
	I1011 14:57:30.990761    4318 cache.go:162] opening:  /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I1011 14:57:30.998859    4318 cache.go:162] opening:  /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I1011 14:57:31.001525    4318 cache.go:162] opening:  /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I1011 14:57:31.097166    4318 cache.go:162] opening:  /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1011 14:57:31.228143    4318 cache.go:162] opening:  /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1011 14:57:31.236477    4318 cache.go:157] /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I1011 14:57:31.236501    4318 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 842.145375ms
	I1011 14:57:31.236517    4318 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I1011 14:57:31.249931    4318 cache.go:162] opening:  /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	W1011 14:57:31.283231    4318 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1011 14:57:31.283308    4318 cache.go:162] opening:  /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	W1011 14:57:31.600729    4318 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1011 14:57:31.600820    4318 cache.go:162] opening:  /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1011 14:57:32.384419    4318 cache.go:157] /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1011 14:57:32.384486    4318 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.990153917s
	I1011 14:57:32.384514    4318 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1011 14:57:32.713540    4318 start.go:128] duration metric: took 2.318322041s to createHost
	I1011 14:57:32.713588    4318 start.go:83] releasing machines lock for "test-preload-324000", held for 2.31850825s
	W1011 14:57:32.713657    4318 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 14:57:32.731124    4318 out.go:177] * Deleting "test-preload-324000" in qemu2 ...
	W1011 14:57:32.758253    4318 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 14:57:32.758288    4318 start.go:729] Will try again in 5 seconds ...
	I1011 14:57:32.899948    4318 cache.go:157] /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I1011 14:57:32.899999    4318 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.504898791s
	I1011 14:57:32.900031    4318 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I1011 14:57:34.245007    4318 cache.go:157] /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I1011 14:57:34.245059    4318 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.850701875s
	I1011 14:57:34.245104    4318 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I1011 14:57:35.673177    4318 cache.go:157] /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I1011 14:57:35.673228    4318 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.278928583s
	I1011 14:57:35.673261    4318 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I1011 14:57:35.982483    4318 cache.go:157] /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I1011 14:57:35.982526    4318 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.58797725s
	I1011 14:57:35.982553    4318 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I1011 14:57:35.986440    4318 cache.go:157] /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I1011 14:57:35.986483    4318 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.592045333s
	I1011 14:57:35.986508    4318 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I1011 14:57:37.758477    4318 start.go:360] acquireMachinesLock for test-preload-324000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 14:57:37.758919    4318 start.go:364] duration metric: took 372.333µs to acquireMachinesLock for "test-preload-324000"
	I1011 14:57:37.759028    4318 start.go:93] Provisioning new machine with config: &{Name:test-preload-324000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-324000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 14:57:37.759262    4318 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 14:57:37.765837    4318 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1011 14:57:37.816616    4318 start.go:159] libmachine.API.Create for "test-preload-324000" (driver="qemu2")
	I1011 14:57:37.816684    4318 client.go:168] LocalClient.Create starting
	I1011 14:57:37.816830    4318 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 14:57:37.816922    4318 main.go:141] libmachine: Decoding PEM data...
	I1011 14:57:37.816943    4318 main.go:141] libmachine: Parsing certificate...
	I1011 14:57:37.817000    4318 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 14:57:37.817057    4318 main.go:141] libmachine: Decoding PEM data...
	I1011 14:57:37.817076    4318 main.go:141] libmachine: Parsing certificate...
	I1011 14:57:37.817610    4318 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 14:57:37.984794    4318 main.go:141] libmachine: Creating SSH key...
	I1011 14:57:38.111725    4318 main.go:141] libmachine: Creating Disk image...
	I1011 14:57:38.111734    4318 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 14:57:38.111982    4318 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/test-preload-324000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/test-preload-324000/disk.qcow2
	I1011 14:57:38.122181    4318 main.go:141] libmachine: STDOUT: 
	I1011 14:57:38.122197    4318 main.go:141] libmachine: STDERR: 
	I1011 14:57:38.122254    4318 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/test-preload-324000/disk.qcow2 +20000M
	I1011 14:57:38.131041    4318 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 14:57:38.131057    4318 main.go:141] libmachine: STDERR: 
	I1011 14:57:38.131068    4318 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/test-preload-324000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/test-preload-324000/disk.qcow2
	I1011 14:57:38.131075    4318 main.go:141] libmachine: Starting QEMU VM...
	I1011 14:57:38.131084    4318 qemu.go:418] Using hvf for hardware acceleration
	I1011 14:57:38.131122    4318 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/test-preload-324000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/test-preload-324000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/test-preload-324000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:c0:f0:17:ca:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/test-preload-324000/disk.qcow2
	I1011 14:57:38.133008    4318 main.go:141] libmachine: STDOUT: 
	I1011 14:57:38.133021    4318 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 14:57:38.133034    4318 client.go:171] duration metric: took 316.348125ms to LocalClient.Create
	I1011 14:57:39.599272    4318 cache.go:157] /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I1011 14:57:39.599341    4318 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.204900083s
	I1011 14:57:39.599372    4318 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I1011 14:57:39.599418    4318 cache.go:87] Successfully saved all images to host disk.
	I1011 14:57:40.135211    4318 start.go:128] duration metric: took 2.37593075s to createHost
	I1011 14:57:40.135312    4318 start.go:83] releasing machines lock for "test-preload-324000", held for 2.376402458s
	W1011 14:57:40.135563    4318 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-324000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-324000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 14:57:40.150187    4318 out.go:201] 
	W1011 14:57:40.154201    4318 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 14:57:40.154232    4318 out.go:270] * 
	* 
	W1011 14:57:40.157237    4318 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 14:57:40.168156    4318 out.go:201] 

                                                
                                                
** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-324000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-10-11 14:57:40.18625 -0700 PDT m=+3609.595957876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-324000 -n test-preload-324000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-324000 -n test-preload-324000: exit status 7 (71.199209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-324000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-324000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-324000
--- FAIL: TestPreload (10.07s)
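Every qemu2 start in this test (and in the other sub-ten-second qemu2 starts below, TestScheduledStopUnix and TestSkaffold) dies at the same point: the socket_vmnet client cannot reach the UNIX socket at /var/run/socket_vmnet ("Connection refused"), so libmachine aborts host creation and minikube exits with status 80 (GUEST_PROVISION) before Kubernetes is provisioned. A minimal triage sketch for the build agent follows; it assumes socket_vmnet was installed via Homebrew as the minikube qemu2 driver docs suggest, so the exact service name and paths may differ on this machine:

	# Is anything serving the socket the tests point at?
	ls -l /var/run/socket_vmnet
	sudo lsof -U 2>/dev/null | grep socket_vmnet

	# Assuming a Homebrew-managed socket_vmnet, restart the daemon
	sudo brew services restart socket_vmnet

	# Re-run one of the failing starts from this report to confirm the guest network is back
	out/minikube-darwin-arm64 start -p test-preload-324000 --memory=2200 --driver=qemu2 --kubernetes-version=v1.24.4

If the socket comes back up, the remaining "Failed to connect to /var/run/socket_vmnet" failures in this report should clear for the same reason.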

                                                
                                    
TestScheduledStopUnix (10.09s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-552000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-552000 --memory=2048 --driver=qemu2 : exit status 80 (9.929492958s)

                                                
                                                
-- stdout --
	* [scheduled-stop-552000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-552000" primary control-plane node in "scheduled-stop-552000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-552000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-552000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [scheduled-stop-552000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-552000" primary control-plane node in "scheduled-stop-552000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-552000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-552000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-10-11 14:57:50.272686 -0700 PDT m=+3619.682532876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-552000 -n scheduled-stop-552000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-552000 -n scheduled-stop-552000: exit status 7 (73.997875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-552000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-552000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-552000
--- FAIL: TestScheduledStopUnix (10.09s)

                                                
                                    
TestSkaffold (13.3s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe662088605 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe662088605 version: (1.018383667s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-785000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-785000 --memory=2600 --driver=qemu2 : exit status 80 (9.932615916s)

                                                
                                                
-- stdout --
	* [skaffold-785000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-785000" primary control-plane node in "skaffold-785000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-785000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-785000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [skaffold-785000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-785000" primary control-plane node in "skaffold-785000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-785000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-785000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-10-11 14:58:03.576466 -0700 PDT m=+3632.986490084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-785000 -n skaffold-785000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-785000 -n skaffold-785000: exit status 7 (67.541459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-785000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-785000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-785000
--- FAIL: TestSkaffold (13.30s)

                                                
                                    
TestRunningBinaryUpgrade (605.62s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2913558501 start -p running-upgrade-130000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2913558501 start -p running-upgrade-130000 --memory=2200 --vm-driver=qemu2 : (1m0.04903475s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-130000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E1011 15:01:29.288204    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/addons-392000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-130000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m31.291552708s)

                                                
                                                
-- stdout --
	* [running-upgrade-130000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-130000" primary control-plane node in "running-upgrade-130000" cluster
	* Updating the running qemu2 "running-upgrade-130000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 14:59:46.120567    4700 out.go:345] Setting OutFile to fd 1 ...
	I1011 14:59:46.120969    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:59:46.120973    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 14:59:46.120976    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:59:46.121145    4700 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 14:59:46.122289    4700 out.go:352] Setting JSON to false
	I1011 14:59:46.141129    4700 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5356,"bootTime":1728678630,"procs":503,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 14:59:46.141194    4700 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 14:59:46.146368    4700 out.go:177] * [running-upgrade-130000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 14:59:46.154356    4700 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 14:59:46.154414    4700 notify.go:220] Checking for updates...
	I1011 14:59:46.165297    4700 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 14:59:46.170255    4700 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 14:59:46.173323    4700 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 14:59:46.177153    4700 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 14:59:46.180266    4700 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 14:59:46.183558    4700 config.go:182] Loaded profile config "running-upgrade-130000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1011 14:59:46.187294    4700 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1011 14:59:46.190361    4700 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 14:59:46.194288    4700 out.go:177] * Using the qemu2 driver based on existing profile
	I1011 14:59:46.201301    4700 start.go:297] selected driver: qemu2
	I1011 14:59:46.201307    4700 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-130000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:57235 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-130000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1011 14:59:46.201349    4700 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 14:59:46.204018    4700 cni.go:84] Creating CNI manager for ""
	I1011 14:59:46.204045    4700 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1011 14:59:46.204064    4700 start.go:340] cluster config:
	{Name:running-upgrade-130000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:57235 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-130000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1011 14:59:46.204118    4700 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 14:59:46.212217    4700 out.go:177] * Starting "running-upgrade-130000" primary control-plane node in "running-upgrade-130000" cluster
	I1011 14:59:46.216309    4700 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1011 14:59:46.216328    4700 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1011 14:59:46.216336    4700 cache.go:56] Caching tarball of preloaded images
	I1011 14:59:46.216392    4700 preload.go:172] Found /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1011 14:59:46.216397    4700 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1011 14:59:46.216447    4700 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/running-upgrade-130000/config.json ...
	I1011 14:59:46.216813    4700 start.go:360] acquireMachinesLock for running-upgrade-130000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 14:59:46.216846    4700 start.go:364] duration metric: took 25.708µs to acquireMachinesLock for "running-upgrade-130000"
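Note: the acquireMachinesLock lines above show the per-profile machine lock being taken with a 500ms retry delay and a 13m timeout before the existing VM is reused. A minimal Go sketch of that acquire-with-retry shape, using a hypothetical file-based lock rather than minikube's actual lock package:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// tryLock attempts to create the lock file exclusively; failure with
// ErrExist means another process currently holds the lock.
func tryLock(path string) (bool, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
	if err != nil {
		if errors.Is(err, os.ErrExist) {
			return false, nil
		}
		return false, err
	}
	return true, f.Close()
}

// acquireMachinesLock polls tryLock every delay until timeout expires,
// mirroring the Delay/Timeout fields shown in the log line.
func acquireMachinesLock(path string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		ok, err := tryLock(path)
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	if err := acquireMachinesLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("duration metric: took %s to acquire lock\n", time.Since(start))
}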
	I1011 14:59:46.216856    4700 start.go:96] Skipping create...Using existing machine configuration
	I1011 14:59:46.216861    4700 fix.go:54] fixHost starting: 
	I1011 14:59:46.217470    4700 fix.go:112] recreateIfNeeded on running-upgrade-130000: state=Running err=<nil>
	W1011 14:59:46.217480    4700 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 14:59:46.220314    4700 out.go:177] * Updating the running qemu2 "running-upgrade-130000" VM ...
	I1011 14:59:46.228085    4700 machine.go:93] provisionDockerMachine start ...
	I1011 14:59:46.228142    4700 main.go:141] libmachine: Using SSH client type: native
	I1011 14:59:46.228251    4700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c0a480] 0x102c0ccc0 <nil>  [] 0s} localhost 57203 <nil> <nil>}
	I1011 14:59:46.228255    4700 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 14:59:46.286497    4700 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-130000
	
	I1011 14:59:46.286511    4700 buildroot.go:166] provisioning hostname "running-upgrade-130000"
	I1011 14:59:46.286561    4700 main.go:141] libmachine: Using SSH client type: native
	I1011 14:59:46.286670    4700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c0a480] 0x102c0ccc0 <nil>  [] 0s} localhost 57203 <nil> <nil>}
	I1011 14:59:46.286679    4700 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-130000 && echo "running-upgrade-130000" | sudo tee /etc/hostname
	I1011 14:59:46.351142    4700 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-130000
	
	I1011 14:59:46.351206    4700 main.go:141] libmachine: Using SSH client type: native
	I1011 14:59:46.351317    4700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c0a480] 0x102c0ccc0 <nil>  [] 0s} localhost 57203 <nil> <nil>}
	I1011 14:59:46.351325    4700 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-130000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-130000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-130000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 14:59:46.409839    4700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
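Note: the shell snippet sent over SSH above is an idempotent /etc/hosts fix-up: if no line already ends in the hostname, it either rewrites an existing 127.0.1.1 entry or appends one. A Go sketch of the same check-then-edit logic applied to a local file (simplified; the real flow ships this as a shell script over SSH):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry makes sure hostsPath maps 127.0.1.1 to hostname,
// mirroring the grep/sed/tee script in the log: do nothing if an entry
// already exists, rewrite the 127.0.1.1 line if present, else append.
func ensureHostsEntry(hostsPath, hostname string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	content := string(data)
	if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(hostname)+`$`).MatchString(content) {
		return nil // already mapped
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	entry := "127.0.1.1 " + hostname
	if loopback.MatchString(content) {
		content = loopback.ReplaceAllString(content, entry)
	} else {
		if !strings.HasSuffix(content, "\n") {
			content += "\n"
		}
		content += entry + "\n"
	}
	return os.WriteFile(hostsPath, []byte(content), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "running-upgrade-130000"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}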
	I1011 14:59:46.409851    4700 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19749-1186/.minikube CaCertPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19749-1186/.minikube}
	I1011 14:59:46.409866    4700 buildroot.go:174] setting up certificates
	I1011 14:59:46.409871    4700 provision.go:84] configureAuth start
	I1011 14:59:46.409878    4700 provision.go:143] copyHostCerts
	I1011 14:59:46.409938    4700 exec_runner.go:144] found /Users/jenkins/minikube-integration/19749-1186/.minikube/cert.pem, removing ...
	I1011 14:59:46.409943    4700 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19749-1186/.minikube/cert.pem
	I1011 14:59:46.410052    4700 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19749-1186/.minikube/cert.pem (1123 bytes)
	I1011 14:59:46.410225    4700 exec_runner.go:144] found /Users/jenkins/minikube-integration/19749-1186/.minikube/key.pem, removing ...
	I1011 14:59:46.410228    4700 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19749-1186/.minikube/key.pem
	I1011 14:59:46.410273    4700 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19749-1186/.minikube/key.pem (1675 bytes)
	I1011 14:59:46.410392    4700 exec_runner.go:144] found /Users/jenkins/minikube-integration/19749-1186/.minikube/ca.pem, removing ...
	I1011 14:59:46.410395    4700 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19749-1186/.minikube/ca.pem
	I1011 14:59:46.410434    4700 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19749-1186/.minikube/ca.pem (1078 bytes)
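Note: the copyHostCerts lines above repeat one pattern per certificate: if a stale copy exists under .minikube, remove it, then copy the file from .minikube/certs and log the byte count. A rough Go sketch of that pattern (paths and helper names are illustrative, not minikube's actual code):

package main

import (
	"fmt"
	"io"
	"log"
	"os"
	"path/filepath"
)

// copyHostCert removes any stale copy at dst, then copies src into place,
// matching the "found ..., removing ..." / "cp: ..." sequence in the log.
func copyHostCert(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		log.Printf("found %s, removing ...", dst)
		if err := os.Remove(dst); err != nil {
			return err
		}
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	n, err := io.Copy(out, in)
	if err != nil {
		return err
	}
	log.Printf("cp: %s --> %s (%d bytes)", src, dst, n)
	return nil
}

func main() {
	home, _ := os.UserHomeDir()
	base := filepath.Join(home, ".minikube")
	for _, name := range []string{"cert.pem", "key.pem", "ca.pem"} {
		if err := copyHostCert(filepath.Join(base, "certs", name), filepath.Join(base, name)); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}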
	I1011 14:59:46.410530    4700 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-130000 san=[127.0.0.1 localhost minikube running-upgrade-130000]
	I1011 14:59:46.712991    4700 provision.go:177] copyRemoteCerts
	I1011 14:59:46.713056    4700 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 14:59:46.713066    4700 sshutil.go:53] new ssh client: &{IP:localhost Port:57203 SSHKeyPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/running-upgrade-130000/id_rsa Username:docker}
	I1011 14:59:46.749504    4700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1011 14:59:46.757491    4700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1011 14:59:46.764567    4700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1011 14:59:46.773905    4700 provision.go:87] duration metric: took 364.030834ms to configureAuth
	I1011 14:59:46.773918    4700 buildroot.go:189] setting minikube options for container-runtime
	I1011 14:59:46.774046    4700 config.go:182] Loaded profile config "running-upgrade-130000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1011 14:59:46.774091    4700 main.go:141] libmachine: Using SSH client type: native
	I1011 14:59:46.774182    4700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c0a480] 0x102c0ccc0 <nil>  [] 0s} localhost 57203 <nil> <nil>}
	I1011 14:59:46.774187    4700 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1011 14:59:46.839207    4700 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1011 14:59:46.839216    4700 buildroot.go:70] root file system type: tmpfs
	I1011 14:59:46.839270    4700 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1011 14:59:46.839329    4700 main.go:141] libmachine: Using SSH client type: native
	I1011 14:59:46.839434    4700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c0a480] 0x102c0ccc0 <nil>  [] 0s} localhost 57203 <nil> <nil>}
	I1011 14:59:46.839467    4700 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1011 14:59:46.902811    4700 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1011 14:59:46.902880    4700 main.go:141] libmachine: Using SSH client type: native
	I1011 14:59:46.903008    4700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c0a480] 0x102c0ccc0 <nil>  [] 0s} localhost 57203 <nil> <nil>}
	I1011 14:59:46.903017    4700 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1011 14:59:46.965297    4700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
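Note: the docker.service update above is deliberately two-phase: the rendered unit is written to docker.service.new, and only if it differs from the current file is it moved into place and docker reloaded, enabled, and restarted. A small Go sketch that assembles that idempotent shell fragment (the command text mirrors the log; the surrounding code is illustrative):

package main

import "fmt"

// buildDockerUnitSwap returns the shell snippet the provisioning step uses:
// write the rendered unit to a .new path, then replace and restart docker
// only if the content actually changed.
func buildDockerUnitSwap(unit string) string {
	const target = "/lib/systemd/system/docker.service"
	write := fmt.Sprintf("sudo mkdir -p /lib/systemd/system && printf %%s %q | sudo tee %s.new", unit, target)
	swap := fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }",
		target)
	return write + "\n" + swap
}

func main() {
	unit := "[Unit]\nDescription=Docker Application Container Engine\n"
	fmt.Println(buildDockerUnitSwap(unit))
}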
	I1011 14:59:46.965308    4700 machine.go:96] duration metric: took 737.236083ms to provisionDockerMachine
	I1011 14:59:46.965314    4700 start.go:293] postStartSetup for "running-upgrade-130000" (driver="qemu2")
	I1011 14:59:46.965320    4700 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 14:59:46.965397    4700 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 14:59:46.965407    4700 sshutil.go:53] new ssh client: &{IP:localhost Port:57203 SSHKeyPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/running-upgrade-130000/id_rsa Username:docker}
	I1011 14:59:46.996896    4700 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 14:59:46.998268    4700 info.go:137] Remote host: Buildroot 2021.02.12
	I1011 14:59:46.998276    4700 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19749-1186/.minikube/addons for local assets ...
	I1011 14:59:46.998347    4700 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19749-1186/.minikube/files for local assets ...
	I1011 14:59:46.998445    4700 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19749-1186/.minikube/files/etc/ssl/certs/17072.pem -> 17072.pem in /etc/ssl/certs
	I1011 14:59:46.998542    4700 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 14:59:47.001189    4700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/files/etc/ssl/certs/17072.pem --> /etc/ssl/certs/17072.pem (1708 bytes)
	I1011 14:59:47.008271    4700 start.go:296] duration metric: took 42.953125ms for postStartSetup
	I1011 14:59:47.008286    4700 fix.go:56] duration metric: took 791.446834ms for fixHost
	I1011 14:59:47.008327    4700 main.go:141] libmachine: Using SSH client type: native
	I1011 14:59:47.008429    4700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c0a480] 0x102c0ccc0 <nil>  [] 0s} localhost 57203 <nil> <nil>}
	I1011 14:59:47.008433    4700 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 14:59:47.071319    4700 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728683987.350347305
	
	I1011 14:59:47.071330    4700 fix.go:216] guest clock: 1728683987.350347305
	I1011 14:59:47.071334    4700 fix.go:229] Guest: 2024-10-11 14:59:47.350347305 -0700 PDT Remote: 2024-10-11 14:59:47.008288 -0700 PDT m=+0.910258876 (delta=342.059305ms)
	I1011 14:59:47.071345    4700 fix.go:200] guest clock delta is within tolerance: 342.059305ms
	I1011 14:59:47.071348    4700 start.go:83] releasing machines lock for "running-upgrade-130000", held for 854.520417ms
	I1011 14:59:47.071426    4700 ssh_runner.go:195] Run: cat /version.json
	I1011 14:59:47.071439    4700 sshutil.go:53] new ssh client: &{IP:localhost Port:57203 SSHKeyPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/running-upgrade-130000/id_rsa Username:docker}
	I1011 14:59:47.071427    4700 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 14:59:47.071491    4700 sshutil.go:53] new ssh client: &{IP:localhost Port:57203 SSHKeyPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/running-upgrade-130000/id_rsa Username:docker}
	W1011 14:59:47.071958    4700 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:57335->127.0.0.1:57203: read: connection reset by peer
	I1011 14:59:47.071975    4700 retry.go:31] will retry after 275.274064ms: ssh: handshake failed: read tcp 127.0.0.1:57335->127.0.0.1:57203: read: connection reset by peer
	W1011 14:59:47.383927    4700 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1011 14:59:47.383997    4700 ssh_runner.go:195] Run: systemctl --version
	I1011 14:59:47.385986    4700 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 14:59:47.387929    4700 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 14:59:47.387959    4700 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1011 14:59:47.390978    4700 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1011 14:59:47.395242    4700 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 14:59:47.395250    4700 start.go:495] detecting cgroup driver to use...
	I1011 14:59:47.395320    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 14:59:47.400508    4700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1011 14:59:47.403535    4700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1011 14:59:47.406659    4700 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1011 14:59:47.406685    4700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1011 14:59:47.410280    4700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1011 14:59:47.413710    4700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1011 14:59:47.417511    4700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1011 14:59:47.420453    4700 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 14:59:47.423503    4700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1011 14:59:47.426833    4700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1011 14:59:47.430238    4700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1011 14:59:47.433608    4700 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 14:59:47.436438    4700 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 14:59:47.439115    4700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 14:59:47.525791    4700 ssh_runner.go:195] Run: sudo systemctl restart containerd
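Note: the run of sed commands above is how the cgroup-driver step rewrites /etc/containerd/config.toml in place: SystemdCgroup is forced to false (cgroupfs), v1 runtime references are migrated to the runc v2 shim, conf_dir is pointed at /etc/cni/net.d, and containerd is restarted. A condensed Go sketch of driving that kind of sed pipeline through a command runner (the local runner here stands in for minikube's SSH runner; the sed expressions are taken from the log):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a shell snippet locally; in minikube the equivalent runner
// executes over SSH inside the guest VM.
func run(cmd string) error {
	out, err := exec.Command("/bin/sh", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%q failed: %v\n%s", cmd, err, out)
	}
	return nil
}

func configureContainerdCgroupfs(configPath string) error {
	cmds := []string{
		// force the legacy cgroupfs driver instead of systemd cgroups
		fmt.Sprintf(`sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' %s`, configPath),
		// migrate any v1 runtime references to the runc v2 shim
		fmt.Sprintf(`sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' %s`, configPath),
		// point CNI at the standard conf dir
		fmt.Sprintf(`sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' %s`, configPath),
		// apply the new config
		"sudo systemctl daemon-reload",
		"sudo systemctl restart containerd",
	}
	for _, c := range cmds {
		if err := run(c); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	if err := configureContainerdCgroupfs("/etc/containerd/config.toml"); err != nil {
		fmt.Println(err)
	}
}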
	I1011 14:59:47.532940    4700 start.go:495] detecting cgroup driver to use...
	I1011 14:59:47.533024    4700 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1011 14:59:47.541812    4700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 14:59:47.546766    4700 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 14:59:47.556180    4700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 14:59:47.561017    4700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1011 14:59:47.565726    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 14:59:47.572572    4700 ssh_runner.go:195] Run: which cri-dockerd
	I1011 14:59:47.573904    4700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1011 14:59:47.576764    4700 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1011 14:59:47.581643    4700 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1011 14:59:47.672564    4700 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1011 14:59:47.761758    4700 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1011 14:59:47.761826    4700 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1011 14:59:47.769419    4700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 14:59:47.859818    4700 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1011 14:59:50.038706    4700 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.178927375s)
	I1011 14:59:50.038774    4700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1011 14:59:50.043747    4700 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1011 14:59:50.050547    4700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1011 14:59:50.055923    4700 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1011 14:59:50.127745    4700 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1011 14:59:50.215080    4700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 14:59:50.292477    4700 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1011 14:59:50.298434    4700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1011 14:59:50.303442    4700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 14:59:50.382848    4700 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1011 14:59:50.422639    4700 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1011 14:59:50.422728    4700 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1011 14:59:50.424723    4700 start.go:563] Will wait 60s for crictl version
	I1011 14:59:50.424763    4700 ssh_runner.go:195] Run: which crictl
	I1011 14:59:50.426163    4700 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 14:59:50.438801    4700 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
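Note: after cri-docker.service is restarted, the start code waits up to 60s for /var/run/cri-dockerd.sock to appear and then up to 60s for crictl to answer with a version. A minimal sketch of that wait loop (timeout and paths from the log, helper names hypothetical):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForPath polls os.Stat until the path exists or the timeout elapses,
// like the "Will wait 60s for socket path" step in the log.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Once the socket exists, crictl should report the runtime version.
	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print(string(out))
}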
	I1011 14:59:50.438900    4700 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1011 14:59:50.451452    4700 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1011 14:59:50.471874    4700 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1011 14:59:50.471979    4700 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1011 14:59:50.473372    4700 kubeadm.go:883] updating cluster {Name:running-upgrade-130000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:57235 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-130000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1011 14:59:50.473422    4700 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1011 14:59:50.473469    4700 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1011 14:59:50.488072    4700 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1011 14:59:50.488087    4700 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1011 14:59:50.488147    4700 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1011 14:59:50.491007    4700 ssh_runner.go:195] Run: which lz4
	I1011 14:59:50.492353    4700 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1011 14:59:50.493564    4700 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1011 14:59:50.493574    4700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1011 14:59:51.494285    4700 docker.go:653] duration metric: took 1.001993209s to copy over tarball
	I1011 14:59:51.494370    4700 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1011 14:59:52.606284    4700 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.111927958s)
	I1011 14:59:52.606298    4700 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1011 14:59:52.622502    4700 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1011 14:59:52.625963    4700 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1011 14:59:52.631024    4700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 14:59:52.717813    4700 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1011 14:59:53.929196    4700 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.211395375s)
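Note: the block from 14:59:50.49 to 14:59:53.93 is the preload path: the local lz4 tarball is copied into the guest as /preloaded.tar.lz4, unpacked under /var with tar -I lz4, removed, repositories.json is rewritten, and docker is restarted so the preloaded image store becomes visible. A compact sketch of that sequence as shell steps driven from Go (the guest address and scp/ssh invocations are placeholders; minikube performs these over its own SSH runner):

package main

import (
	"fmt"
	"os/exec"
)

// step runs one shell command and surfaces its combined output on failure.
func step(cmd string) error {
	out, err := exec.Command("/bin/sh", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %v\n%s", cmd, err, out)
	}
	return nil
}

// loadPreload mirrors the logged sequence: copy the tarball into the guest,
// unpack it into /var (which contains /var/lib/docker), remove it, and
// restart docker so the preloaded images become visible.
func loadPreload(localTarball string) error {
	steps := []string{
		fmt.Sprintf("scp %s docker@guest:/preloaded.tar.lz4", localTarball), // stand-in for minikube's scp-over-SSH
		"ssh docker@guest 'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4'",
		"ssh docker@guest 'sudo rm -f /preloaded.tar.lz4'",
		"ssh docker@guest 'sudo systemctl restart docker'",
	}
	for _, s := range steps {
		if err := step(s); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	if err := loadPreload("preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}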
	I1011 14:59:53.929290    4700 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1011 14:59:53.945271    4700 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1011 14:59:53.945282    4700 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1011 14:59:53.945287    4700 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1011 14:59:53.949907    4700 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 14:59:53.957528    4700 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1011 14:59:53.958943    4700 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1011 14:59:53.959210    4700 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1011 14:59:53.959347    4700 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 14:59:53.961609    4700 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1011 14:59:53.961605    4700 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1011 14:59:53.962786    4700 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1011 14:59:53.962994    4700 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1011 14:59:53.963246    4700 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1011 14:59:53.964344    4700 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1011 14:59:53.964356    4700 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1011 14:59:53.965471    4700 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1011 14:59:53.965643    4700 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1011 14:59:53.966715    4700 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1011 14:59:53.967340    4700 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1011 14:59:54.480383    4700 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1011 14:59:54.484644    4700 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1011 14:59:54.492705    4700 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1011 14:59:54.492739    4700 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1011 14:59:54.492797    4700 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1011 14:59:54.503888    4700 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1011 14:59:54.503913    4700 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1011 14:59:54.503974    4700 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1011 14:59:54.505518    4700 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1011 14:59:54.505578    4700 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1011 14:59:54.523388    4700 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1011 14:59:54.523922    4700 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1011 14:59:54.523944    4700 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1011 14:59:54.523992    4700 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1011 14:59:54.536346    4700 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1011 14:59:54.579736    4700 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1011 14:59:54.590907    4700 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1011 14:59:54.590929    4700 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1011 14:59:54.590986    4700 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1011 14:59:54.597831    4700 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1011 14:59:54.607575    4700 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1011 14:59:54.607721    4700 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1011 14:59:54.609736    4700 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1011 14:59:54.609752    4700 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1011 14:59:54.609800    4700 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1011 14:59:54.610558    4700 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I1011 14:59:54.610571    4700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I1011 14:59:54.652677    4700 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1011 14:59:54.652831    4700 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1011 14:59:54.666707    4700 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1011 14:59:54.666742    4700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1011 14:59:54.689637    4700 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1011 14:59:54.689654    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1011 14:59:54.718992    4700 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	W1011 14:59:54.740033    4700 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1011 14:59:54.740185    4700 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1011 14:59:54.755830    4700 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1011 14:59:54.768416    4700 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1011 14:59:54.768437    4700 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1011 14:59:54.768509    4700 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1011 14:59:54.822386    4700 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1011 14:59:54.822392    4700 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1011 14:59:54.822451    4700 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1011 14:59:54.822506    4700 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1011 14:59:54.860290    4700 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1011 14:59:54.860443    4700 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1011 14:59:54.871920    4700 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1011 14:59:54.871947    4700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1011 14:59:54.890937    4700 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1011 14:59:54.890949    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W1011 14:59:54.893110    4700 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1011 14:59:54.893221    4700 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 14:59:55.064501    4700 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1011 14:59:55.064527    4700 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1011 14:59:55.064532    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1011 14:59:55.064535    4700 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1011 14:59:55.064554    4700 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 14:59:55.064615    4700 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 14:59:55.804022    4700 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1011 14:59:55.804078    4700 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1011 14:59:55.804537    4700 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1011 14:59:55.809180    4700 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1011 14:59:55.809222    4700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1011 14:59:55.868352    4700 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1011 14:59:55.868365    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1011 14:59:56.167125    4700 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1011 14:59:56.167166    4700 cache_images.go:92] duration metric: took 2.221920417s to LoadCachedImages
	W1011 14:59:56.167201    4700 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
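Note: the warning above is the notable failure in this pass: the preload only contains images under the old k8s.gcr.io names, so every registry.k8s.io image is marked "needs transfer", and the fallback to the per-image cache then fails because .minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1 does not exist on the host. A small sketch of that existence-then-fallback check (the hash comparison is simplified to a tag lookup; names are illustrative):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// needsTransfer reports whether an image tag is missing from the runtime.
// In the real flow this is a docker image inspect against the expected
// hash; here it is reduced to a lookup in a set of loaded tags.
func needsTransfer(loaded map[string]bool, image string) bool {
	return !loaded[image]
}

// loadFromCache stats the per-image cache file before attempting a load,
// which is exactly the step that fails for kube-apiserver in the log.
func loadFromCache(cacheDir, image string) error {
	path := filepath.Join(cacheDir, filepath.FromSlash(image))
	if _, err := os.Stat(path); err != nil {
		return fmt.Errorf("LoadCachedImages: stat %s: %w", path, err)
	}
	// ...copy the file into the guest and load it into docker...
	return nil
}

func main() {
	loaded := map[string]bool{"k8s.gcr.io/kube-apiserver:v1.24.1": true} // preload used the old registry name
	const want = "registry.k8s.io/kube-apiserver_v1.24.1"
	if needsTransfer(loaded, "registry.k8s.io/kube-apiserver:v1.24.1") {
		if err := loadFromCache(os.ExpandEnv("$HOME/.minikube/cache/images/arm64"), want); err != nil {
			fmt.Println("X Unable to load cached images:", err)
		}
	}
}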
	I1011 14:59:56.167207    4700 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1011 14:59:56.167265    4700 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-130000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-130000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 14:59:56.167339    4700 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1011 14:59:56.228918    4700 cni.go:84] Creating CNI manager for ""
	I1011 14:59:56.228934    4700 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1011 14:59:56.228940    4700 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 14:59:56.228948    4700 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-130000 NodeName:running-upgrade-130000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 14:59:56.229022    4700 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-130000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 14:59:56.229090    4700 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1011 14:59:56.235257    4700 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 14:59:56.235318    4700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 14:59:56.244032    4700 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1011 14:59:56.258614    4700 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 14:59:56.275866    4700 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1011 14:59:56.295220    4700 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1011 14:59:56.297037    4700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 14:59:56.437904    4700 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 14:59:56.443463    4700 certs.go:68] Setting up /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/running-upgrade-130000 for IP: 10.0.2.15
	I1011 14:59:56.443475    4700 certs.go:194] generating shared ca certs ...
	I1011 14:59:56.443484    4700 certs.go:226] acquiring lock for ca certs: {Name:mk35edffff951ee63400693cabf88751b6257cd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 14:59:56.443660    4700 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19749-1186/.minikube/ca.key
	I1011 14:59:56.443695    4700 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19749-1186/.minikube/proxy-client-ca.key
	I1011 14:59:56.443701    4700 certs.go:256] generating profile certs ...
	I1011 14:59:56.443770    4700 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/running-upgrade-130000/client.key
	I1011 14:59:56.443786    4700 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/running-upgrade-130000/apiserver.key.c0b51eea
	I1011 14:59:56.443801    4700 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/running-upgrade-130000/apiserver.crt.c0b51eea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1011 14:59:56.558360    4700 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/running-upgrade-130000/apiserver.crt.c0b51eea ...
	I1011 14:59:56.558376    4700 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/running-upgrade-130000/apiserver.crt.c0b51eea: {Name:mkb3e354cc206737d91b3a0c44541b25ee750043 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 14:59:56.558705    4700 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/running-upgrade-130000/apiserver.key.c0b51eea ...
	I1011 14:59:56.558711    4700 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/running-upgrade-130000/apiserver.key.c0b51eea: {Name:mk3a30904050e22ee16651f05d06df7619d89923 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 14:59:56.558866    4700 certs.go:381] copying /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/running-upgrade-130000/apiserver.crt.c0b51eea -> /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/running-upgrade-130000/apiserver.crt
	I1011 14:59:56.558985    4700 certs.go:385] copying /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/running-upgrade-130000/apiserver.key.c0b51eea -> /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/running-upgrade-130000/apiserver.key
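Note: the apiserver profile cert above is written under a suffixed name (apiserver.crt.c0b51eea) and then copied to the canonical apiserver.crt/apiserver.key, a scheme that appears to key the file name to the requested SAN/IP set so a changed set forces regeneration. A tiny illustrative sketch of such a naming scheme (the hash choice here is an assumption, not minikube's actual derivation):

package main

import (
	"crypto/sha1"
	"fmt"
	"sort"
	"strings"
)

// certSuffix derives a short, stable suffix from the certificate's SAN list,
// so a different IP set yields a different on-disk name (like .c0b51eea).
// The sha1-over-sorted-SANs construction is an assumption for illustration.
func certSuffix(sans []string) string {
	sorted := append([]string(nil), sans...)
	sort.Strings(sorted)
	sum := sha1.Sum([]byte(strings.Join(sorted, ",")))
	return fmt.Sprintf("%x", sum[:4])
}

func main() {
	sans := []string{"10.96.0.1", "127.0.0.1", "10.0.0.1", "10.0.2.15"}
	suffix := certSuffix(sans)
	fmt.Printf("apiserver.crt.%s -> apiserver.crt\n", suffix)
	fmt.Printf("apiserver.key.%s -> apiserver.key\n", suffix)
}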
	I1011 14:59:56.559113    4700 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/running-upgrade-130000/proxy-client.key
	I1011 14:59:56.559245    4700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/1707.pem (1338 bytes)
	W1011 14:59:56.559271    4700 certs.go:480] ignoring /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/1707_empty.pem, impossibly tiny 0 bytes
	I1011 14:59:56.559276    4700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca-key.pem (1679 bytes)
	I1011 14:59:56.559297    4700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem (1078 bytes)
	I1011 14:59:56.559315    4700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem (1123 bytes)
	I1011 14:59:56.559333    4700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/key.pem (1675 bytes)
	I1011 14:59:56.559369    4700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19749-1186/.minikube/files/etc/ssl/certs/17072.pem (1708 bytes)
	I1011 14:59:56.559710    4700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 14:59:56.569529    4700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 14:59:56.585665    4700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 14:59:56.597534    4700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 14:59:56.609642    4700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/running-upgrade-130000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1011 14:59:56.626410    4700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/running-upgrade-130000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1011 14:59:56.657495    4700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/running-upgrade-130000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 14:59:56.667965    4700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/running-upgrade-130000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1011 14:59:56.678546    4700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/1707.pem --> /usr/share/ca-certificates/1707.pem (1338 bytes)
	I1011 14:59:56.693522    4700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/files/etc/ssl/certs/17072.pem --> /usr/share/ca-certificates/17072.pem (1708 bytes)
	I1011 14:59:56.708395    4700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 14:59:56.717095    4700 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 14:59:56.733244    4700 ssh_runner.go:195] Run: openssl version
	I1011 14:59:56.739200    4700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17072.pem && ln -fs /usr/share/ca-certificates/17072.pem /etc/ssl/certs/17072.pem"
	I1011 14:59:56.742446    4700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17072.pem
	I1011 14:59:56.746545    4700 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:05 /usr/share/ca-certificates/17072.pem
	I1011 14:59:56.746591    4700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17072.pem
	I1011 14:59:56.755879    4700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17072.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 14:59:56.766185    4700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 14:59:56.774307    4700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 14:59:56.777973    4700 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:58 /usr/share/ca-certificates/minikubeCA.pem
	I1011 14:59:56.778008    4700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 14:59:56.781356    4700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 14:59:56.799240    4700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1707.pem && ln -fs /usr/share/ca-certificates/1707.pem /etc/ssl/certs/1707.pem"
	I1011 14:59:56.822042    4700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1707.pem
	I1011 14:59:56.832838    4700 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:05 /usr/share/ca-certificates/1707.pem
	I1011 14:59:56.832910    4700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1707.pem
	I1011 14:59:56.840228    4700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1707.pem /etc/ssl/certs/51391683.0"
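The three blocks above register each CA file (17072.pem, minikubeCA.pem and 1707.pem) in the guest's OpenSSL trust directory: the certificate is copied under /usr/share/ca-certificates, linked into /etc/ssl/certs, and then linked again under its subject hash (for example b5213941.0 for minikubeCA.pem) so OpenSSL-based clients can look it up by hash. A minimal sketch of the same pattern, built only from commands that already appear in the log:

    # Print the subject hash OpenSSL uses to locate a CA under /etc/ssl/certs
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # Create the <hash>.0 link only if it does not already exist
    sudo /bin/bash -c "test -L /etc/ssl/certs/${HASH}.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/${HASH}.0"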
	I1011 14:59:56.847552    4700 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 14:59:56.850721    4700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1011 14:59:56.861573    4700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1011 14:59:56.863362    4700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1011 14:59:56.865368    4700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1011 14:59:56.885628    4700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1011 14:59:56.887586    4700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
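Each of the six openssl runs above uses -checkend 86400 to confirm that the corresponding control-plane certificate is still valid for at least another 86400 seconds (24 hours); a certificate failing that check would presumably be regenerated rather than reused. Checking one of them by hand inside the guest looks like this (path taken from the log):

    # Exit status 0: still valid 24h from now; exit status 1: expires within 24h
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"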
	I1011 14:59:56.897056    4700 kubeadm.go:392] StartCluster: {Name:running-upgrade-130000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:57235 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-130000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1011 14:59:56.897131    4700 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1011 14:59:56.920271    4700 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 14:59:56.927513    4700 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1011 14:59:56.927519    4700 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1011 14:59:56.927548    4700 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1011 14:59:56.930331    4700 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1011 14:59:56.930582    4700 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-130000" does not appear in /Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 14:59:56.930633    4700 kubeconfig.go:62] /Users/jenkins/minikube-integration/19749-1186/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-130000" cluster setting kubeconfig missing "running-upgrade-130000" context setting]
	I1011 14:59:56.930767    4700 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/kubeconfig: {Name:mkc848521291f94f61a80272f8eb43a8779805e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 14:59:56.931486    4700 kapi.go:59] client config for running-upgrade-130000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/running-upgrade-130000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/running-upgrade-130000/client.key", CAFile:"/Users/jenkins/minikube-integration/19749-1186/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104662e40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1011 14:59:56.931830    4700 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1011 14:59:56.934660    4700 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-130000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
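The unified diff above is the reason the restart path reconfigures the cluster: the freshly rendered /var/tmp/minikube/kubeadm.yaml.new changes criSocket from the bare path /var/run/cri-dockerd.sock to the unix:// URL form, switches cgroupDriver from systemd to cgroupfs, and adds hairpinMode and runtimeRequestTimeout settings. Drift is detected simply by diffing the old and new files (the sudo diff -u run logged just above); the same check can be reproduced by hand:

    # diff exits 0 when the files match and 1 when they differ (config drift)
    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
        echo "kubeadm config drift detected; reconfiguring from kubeadm.yaml.new"
    fi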
	I1011 14:59:56.934666    4700 kubeadm.go:1160] stopping kube-system containers ...
	I1011 14:59:56.934716    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1011 14:59:56.963518    4700 docker.go:483] Stopping containers: [8eff891e4c56 3e8ced358756 dc72a658b8c9 ddb08b4b5869 596c8239d6cf 5aceda2abdb5 a50040ff51db 5d5ef892813a 5cfa7c2ea4c1 cc95eac31e92 246de156eb62 83d54e5c1054 a7b402fd3fdb 005a5b09e6f6 5abe31329205 74e387d47d30 b65a2f69566c 462cc3aa4415 bd440579b25f 8835935c0687]
	I1011 14:59:56.963605    4700 ssh_runner.go:195] Run: docker stop 8eff891e4c56 3e8ced358756 dc72a658b8c9 ddb08b4b5869 596c8239d6cf 5aceda2abdb5 a50040ff51db 5d5ef892813a 5cfa7c2ea4c1 cc95eac31e92 246de156eb62 83d54e5c1054 a7b402fd3fdb 005a5b09e6f6 5abe31329205 74e387d47d30 b65a2f69566c 462cc3aa4415 bd440579b25f 8835935c0687
	I1011 14:59:58.168030    4700 ssh_runner.go:235] Completed: docker stop 8eff891e4c56 3e8ced358756 dc72a658b8c9 ddb08b4b5869 596c8239d6cf 5aceda2abdb5 a50040ff51db 5d5ef892813a 5cfa7c2ea4c1 cc95eac31e92 246de156eb62 83d54e5c1054 a7b402fd3fdb 005a5b09e6f6 5abe31329205 74e387d47d30 b65a2f69566c 462cc3aa4415 bd440579b25f 8835935c0687: (1.204434584s)
	I1011 14:59:58.168117    4700 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1011 14:59:58.257035    4700 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 14:59:58.260389    4700 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Oct 11 21:59 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Oct 11 21:59 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Oct 11 21:59 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Oct 11 21:59 /etc/kubernetes/scheduler.conf
	
	I1011 14:59:58.260431    4700 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57235 /etc/kubernetes/admin.conf
	I1011 14:59:58.263442    4700 kubeadm.go:163] "https://control-plane.minikube.internal:57235" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:57235 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1011 14:59:58.263477    4700 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 14:59:58.266224    4700 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57235 /etc/kubernetes/kubelet.conf
	I1011 14:59:58.268848    4700 kubeadm.go:163] "https://control-plane.minikube.internal:57235" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:57235 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1011 14:59:58.268879    4700 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 14:59:58.271839    4700 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57235 /etc/kubernetes/controller-manager.conf
	I1011 14:59:58.276550    4700 kubeadm.go:163] "https://control-plane.minikube.internal:57235" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:57235 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1011 14:59:58.276585    4700 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 14:59:58.280676    4700 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57235 /etc/kubernetes/scheduler.conf
	I1011 14:59:58.283558    4700 kubeadm.go:163] "https://control-plane.minikube.internal:57235" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:57235 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1011 14:59:58.283613    4700 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
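The four grep/rm pairs above check each existing kubeconfig under /etc/kubernetes for the expected control-plane endpoint https://control-plane.minikube.internal:57235; since none of them mentions it (every grep exits with status 1), all four files are removed so the kubeadm kubeconfig phase below can regenerate them against the correct endpoint. The per-file pattern is simply:

    # Keep the file only if it already points at the expected endpoint, otherwise drop it
    sudo grep https://control-plane.minikube.internal:57235 /etc/kubernetes/admin.conf \
      || sudo rm -f /etc/kubernetes/admin.conf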
	I1011 14:59:58.288499    4700 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 14:59:58.291212    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 14:59:58.311670    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 14:59:58.802601    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1011 14:59:59.009484    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 14:59:59.038728    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
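Rather than a full kubeadm init, the restart path re-runs the individual init phases in sequence (certs, kubeconfig, kubelet-start, control-plane, etcd), all against the rewritten /var/tmp/minikube/kubeadm.yaml and with the version-pinned binaries directory prepended to PATH. The invocation pattern, copied from the log, is:

    # Re-run a single kubeadm init phase using the pinned v1.24.1 binaries
    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
      kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml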
	I1011 14:59:59.068433    4700 api_server.go:52] waiting for apiserver process to appear ...
	I1011 14:59:59.068512    4700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 14:59:59.570920    4700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 15:00:00.070584    4700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 15:00:00.570564    4700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 15:00:00.574820    4700 api_server.go:72] duration metric: took 1.506418291s to wait for apiserver process to appear ...
	I1011 15:00:00.574830    4700 api_server.go:88] waiting for apiserver healthz status ...
	I1011 15:00:00.574845    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:00:05.576985    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:00:05.577064    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:00:10.577739    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:00:10.577801    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:00:15.578467    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:00:15.578496    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:00:20.579156    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:00:20.579341    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:00:25.580732    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:00:25.580803    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:00:30.582632    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:00:30.582734    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:00:35.585188    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:00:35.585283    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:00:40.587999    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:00:40.588095    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:00:45.590958    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:00:45.591040    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:00:50.593581    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:00:50.593682    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:00:55.595640    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:00:55.595736    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:01:00.598496    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
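From 15:00:00 onwards every probe of https://10.0.2.15:8443/healthz times out after roughly five seconds with no response, so the wait loop alternates between probing and collecting component logs; within the window shown here the apiserver never reports healthy. The probe itself is just an HTTPS GET against the node IP; a rough manual equivalent from inside the guest would be the following (curl and its flags are an illustrative assumption, not something the tooling above runs):

    # Approximate the healthz probe: skip TLS verification, give up after 5 seconds
    curl -k --max-time 5 https://10.0.2.15:8443/healthz || echo "apiserver not responding"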
	I1011 15:01:00.598985    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:01:00.640170    4700 logs.go:282] 2 containers: [24f46358727d dc72a658b8c9]
	I1011 15:01:00.640329    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:01:00.662418    4700 logs.go:282] 2 containers: [9f0e46648c4a ddb08b4b5869]
	I1011 15:01:00.662552    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:01:00.678498    4700 logs.go:282] 1 containers: [6105a62dc060]
	I1011 15:01:00.678568    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:01:00.691095    4700 logs.go:282] 2 containers: [92f60d23dbb0 3e8ced358756]
	I1011 15:01:00.691174    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:01:00.703591    4700 logs.go:282] 1 containers: [3da92cc90a0f]
	I1011 15:01:00.703656    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:01:00.713927    4700 logs.go:282] 2 containers: [ab10164156ed 8eff891e4c56]
	I1011 15:01:00.714002    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:01:00.724853    4700 logs.go:282] 0 containers: []
	W1011 15:01:00.724863    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:01:00.724923    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:01:00.734276    4700 logs.go:282] 0 containers: []
	W1011 15:01:00.734290    4700 logs.go:284] No container was found matching "storage-provisioner"
	I1011 15:01:00.734303    4700 logs.go:123] Gathering logs for kube-scheduler [92f60d23dbb0] ...
	I1011 15:01:00.734308    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f60d23dbb0"
	I1011 15:01:00.746226    4700 logs.go:123] Gathering logs for kube-controller-manager [ab10164156ed] ...
	I1011 15:01:00.746238    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab10164156ed"
	I1011 15:01:00.763771    4700 logs.go:123] Gathering logs for kube-apiserver [24f46358727d] ...
	I1011 15:01:00.763783    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24f46358727d"
	I1011 15:01:00.778570    4700 logs.go:123] Gathering logs for etcd [9f0e46648c4a] ...
	I1011 15:01:00.778584    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0e46648c4a"
	I1011 15:01:00.796769    4700 logs.go:123] Gathering logs for coredns [6105a62dc060] ...
	I1011 15:01:00.796782    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6105a62dc060"
	I1011 15:01:00.807955    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:01:00.807966    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:01:00.844781    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:01:00.844874    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:01:00.845384    4700 logs.go:123] Gathering logs for kube-apiserver [dc72a658b8c9] ...
	I1011 15:01:00.845389    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc72a658b8c9"
	I1011 15:01:00.857272    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:01:00.857285    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:01:00.868955    4700 logs.go:123] Gathering logs for etcd [ddb08b4b5869] ...
	I1011 15:01:00.868968    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddb08b4b5869"
	I1011 15:01:00.883159    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:01:00.883169    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:01:00.908229    4700 logs.go:123] Gathering logs for kube-proxy [3da92cc90a0f] ...
	I1011 15:01:00.908236    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3da92cc90a0f"
	I1011 15:01:00.922554    4700 logs.go:123] Gathering logs for kube-controller-manager [8eff891e4c56] ...
	I1011 15:01:00.922565    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff891e4c56"
	I1011 15:01:00.934273    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:01:00.934286    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:01:00.939126    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:01:00.939135    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:01:01.012794    4700 logs.go:123] Gathering logs for kube-scheduler [3e8ced358756] ...
	I1011 15:01:01.012805    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8ced358756"
	I1011 15:01:01.024181    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:01:01.024191    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:01:01.024217    4700 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1011 15:01:01.024222    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	  Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:01:01.024231    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	  Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:01:01.024239    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:01:01.024241    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
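The block starting at 15:01:00 is the first of several near-identical diagnostics passes: minikube lists the kube-system containers, tails up to 400 recent log lines from each of them plus kubelet, Docker, dmesg and kubectl describe nodes, and re-reports the kubelet problem it keeps finding (the kubelet's list of the coredns ConfigMap is forbidden because the node authorizer finds no relationship between node running-upgrade-130000 and that object). The later passes at 15:01:16, 15:01:31, 15:01:47 and 15:02:02 differ only in their timestamps. Per-container collection uses the pattern shown in the log:

    # Tail the most recent 400 log lines of one control-plane container
    docker logs --tail 400 24f46358727d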
	I1011 15:01:11.028390    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:01:16.031227    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:01:16.031767    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:01:16.070545    4700 logs.go:282] 2 containers: [24f46358727d dc72a658b8c9]
	I1011 15:01:16.070712    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:01:16.091949    4700 logs.go:282] 2 containers: [9f0e46648c4a ddb08b4b5869]
	I1011 15:01:16.092072    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:01:16.107428    4700 logs.go:282] 1 containers: [6105a62dc060]
	I1011 15:01:16.107521    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:01:16.121752    4700 logs.go:282] 2 containers: [92f60d23dbb0 3e8ced358756]
	I1011 15:01:16.121845    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:01:16.132808    4700 logs.go:282] 1 containers: [3da92cc90a0f]
	I1011 15:01:16.132884    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:01:16.143391    4700 logs.go:282] 2 containers: [ab10164156ed 8eff891e4c56]
	I1011 15:01:16.143465    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:01:16.158240    4700 logs.go:282] 0 containers: []
	W1011 15:01:16.158249    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:01:16.158305    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:01:16.168512    4700 logs.go:282] 0 containers: []
	W1011 15:01:16.168520    4700 logs.go:284] No container was found matching "storage-provisioner"
	I1011 15:01:16.168538    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:01:16.168543    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:01:16.193982    4700 logs.go:123] Gathering logs for kube-apiserver [24f46358727d] ...
	I1011 15:01:16.193994    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24f46358727d"
	I1011 15:01:16.208312    4700 logs.go:123] Gathering logs for kube-apiserver [dc72a658b8c9] ...
	I1011 15:01:16.208322    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc72a658b8c9"
	I1011 15:01:16.219901    4700 logs.go:123] Gathering logs for etcd [9f0e46648c4a] ...
	I1011 15:01:16.219916    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0e46648c4a"
	I1011 15:01:16.234506    4700 logs.go:123] Gathering logs for kube-scheduler [92f60d23dbb0] ...
	I1011 15:01:16.234516    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f60d23dbb0"
	I1011 15:01:16.246208    4700 logs.go:123] Gathering logs for kube-controller-manager [ab10164156ed] ...
	I1011 15:01:16.246225    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab10164156ed"
	I1011 15:01:16.264306    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:01:16.264316    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:01:16.299331    4700 logs.go:123] Gathering logs for kube-controller-manager [8eff891e4c56] ...
	I1011 15:01:16.299342    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff891e4c56"
	I1011 15:01:16.310391    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:01:16.310406    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:01:16.345885    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:01:16.345979    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:01:16.346454    4700 logs.go:123] Gathering logs for etcd [ddb08b4b5869] ...
	I1011 15:01:16.346459    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddb08b4b5869"
	I1011 15:01:16.363530    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:01:16.363539    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:01:16.374897    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:01:16.374909    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:01:16.379551    4700 logs.go:123] Gathering logs for coredns [6105a62dc060] ...
	I1011 15:01:16.379557    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6105a62dc060"
	I1011 15:01:16.390588    4700 logs.go:123] Gathering logs for kube-scheduler [3e8ced358756] ...
	I1011 15:01:16.390599    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8ced358756"
	I1011 15:01:16.404410    4700 logs.go:123] Gathering logs for kube-proxy [3da92cc90a0f] ...
	I1011 15:01:16.404422    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3da92cc90a0f"
	I1011 15:01:16.420036    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:01:16.420045    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:01:16.420072    4700 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1011 15:01:16.420077    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	  Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:01:16.420081    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	  Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:01:16.420085    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:01:16.420088    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:01:26.424130    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:01:31.427290    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:01:31.427835    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:01:31.467428    4700 logs.go:282] 2 containers: [24f46358727d dc72a658b8c9]
	I1011 15:01:31.467597    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:01:31.496772    4700 logs.go:282] 2 containers: [9f0e46648c4a ddb08b4b5869]
	I1011 15:01:31.496877    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:01:31.510704    4700 logs.go:282] 1 containers: [6105a62dc060]
	I1011 15:01:31.510798    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:01:31.525524    4700 logs.go:282] 2 containers: [92f60d23dbb0 3e8ced358756]
	I1011 15:01:31.525612    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:01:31.536316    4700 logs.go:282] 1 containers: [3da92cc90a0f]
	I1011 15:01:31.536390    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:01:31.547179    4700 logs.go:282] 2 containers: [ab10164156ed 8eff891e4c56]
	I1011 15:01:31.547252    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:01:31.557972    4700 logs.go:282] 0 containers: []
	W1011 15:01:31.557982    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:01:31.558055    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:01:31.568513    4700 logs.go:282] 0 containers: []
	W1011 15:01:31.568522    4700 logs.go:284] No container was found matching "storage-provisioner"
	I1011 15:01:31.568531    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:01:31.568536    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:01:31.594397    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:01:31.594407    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:01:31.599069    4700 logs.go:123] Gathering logs for etcd [ddb08b4b5869] ...
	I1011 15:01:31.599077    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddb08b4b5869"
	I1011 15:01:31.612864    4700 logs.go:123] Gathering logs for kube-scheduler [3e8ced358756] ...
	I1011 15:01:31.612876    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8ced358756"
	I1011 15:01:31.632112    4700 logs.go:123] Gathering logs for kube-proxy [3da92cc90a0f] ...
	I1011 15:01:31.632124    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3da92cc90a0f"
	I1011 15:01:31.644935    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:01:31.644962    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:01:31.681656    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:01:31.681747    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:01:31.682204    4700 logs.go:123] Gathering logs for kube-apiserver [dc72a658b8c9] ...
	I1011 15:01:31.682208    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc72a658b8c9"
	I1011 15:01:31.693877    4700 logs.go:123] Gathering logs for coredns [6105a62dc060] ...
	I1011 15:01:31.693890    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6105a62dc060"
	I1011 15:01:31.706150    4700 logs.go:123] Gathering logs for kube-apiserver [24f46358727d] ...
	I1011 15:01:31.706163    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24f46358727d"
	I1011 15:01:31.720433    4700 logs.go:123] Gathering logs for kube-controller-manager [ab10164156ed] ...
	I1011 15:01:31.720446    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab10164156ed"
	I1011 15:01:31.738104    4700 logs.go:123] Gathering logs for kube-controller-manager [8eff891e4c56] ...
	I1011 15:01:31.738117    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff891e4c56"
	I1011 15:01:31.749256    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:01:31.749267    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:01:31.761248    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:01:31.761259    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:01:31.796175    4700 logs.go:123] Gathering logs for etcd [9f0e46648c4a] ...
	I1011 15:01:31.796187    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0e46648c4a"
	I1011 15:01:31.810504    4700 logs.go:123] Gathering logs for kube-scheduler [92f60d23dbb0] ...
	I1011 15:01:31.810518    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f60d23dbb0"
	I1011 15:01:31.825414    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:01:31.825428    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:01:31.825452    4700 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1011 15:01:31.825457    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	  Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:01:31.825461    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	  Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:01:31.825465    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:01:31.825468    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:01:41.829506    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:01:46.832373    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:01:46.832925    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:01:46.871727    4700 logs.go:282] 2 containers: [24f46358727d dc72a658b8c9]
	I1011 15:01:46.871841    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:01:46.891950    4700 logs.go:282] 2 containers: [9f0e46648c4a ddb08b4b5869]
	I1011 15:01:46.892042    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:01:46.907089    4700 logs.go:282] 1 containers: [6105a62dc060]
	I1011 15:01:46.907178    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:01:46.920453    4700 logs.go:282] 2 containers: [92f60d23dbb0 3e8ced358756]
	I1011 15:01:46.920539    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:01:46.931282    4700 logs.go:282] 1 containers: [3da92cc90a0f]
	I1011 15:01:46.931355    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:01:46.948278    4700 logs.go:282] 2 containers: [ab10164156ed 8eff891e4c56]
	I1011 15:01:46.948351    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:01:46.958561    4700 logs.go:282] 0 containers: []
	W1011 15:01:46.958573    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:01:46.958631    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:01:46.972401    4700 logs.go:282] 0 containers: []
	W1011 15:01:46.972415    4700 logs.go:284] No container was found matching "storage-provisioner"
	I1011 15:01:46.972424    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:01:46.972430    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:01:46.984419    4700 logs.go:123] Gathering logs for etcd [9f0e46648c4a] ...
	I1011 15:01:46.984435    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0e46648c4a"
	I1011 15:01:46.998686    4700 logs.go:123] Gathering logs for etcd [ddb08b4b5869] ...
	I1011 15:01:46.998697    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddb08b4b5869"
	I1011 15:01:47.014813    4700 logs.go:123] Gathering logs for coredns [6105a62dc060] ...
	I1011 15:01:47.014824    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6105a62dc060"
	I1011 15:01:47.026572    4700 logs.go:123] Gathering logs for kube-scheduler [3e8ced358756] ...
	I1011 15:01:47.026584    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8ced358756"
	I1011 15:01:47.037620    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:01:47.037631    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:01:47.042543    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:01:47.042549    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:01:47.076523    4700 logs.go:123] Gathering logs for kube-apiserver [24f46358727d] ...
	I1011 15:01:47.076535    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24f46358727d"
	I1011 15:01:47.090389    4700 logs.go:123] Gathering logs for kube-apiserver [dc72a658b8c9] ...
	I1011 15:01:47.090400    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc72a658b8c9"
	I1011 15:01:47.102701    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:01:47.102714    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:01:47.139521    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:01:47.139613    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:01:47.140117    4700 logs.go:123] Gathering logs for kube-scheduler [92f60d23dbb0] ...
	I1011 15:01:47.140125    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f60d23dbb0"
	I1011 15:01:47.151696    4700 logs.go:123] Gathering logs for kube-proxy [3da92cc90a0f] ...
	I1011 15:01:47.151706    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3da92cc90a0f"
	I1011 15:01:47.163092    4700 logs.go:123] Gathering logs for kube-controller-manager [ab10164156ed] ...
	I1011 15:01:47.163103    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab10164156ed"
	I1011 15:01:47.180526    4700 logs.go:123] Gathering logs for kube-controller-manager [8eff891e4c56] ...
	I1011 15:01:47.180538    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff891e4c56"
	I1011 15:01:47.191615    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:01:47.191627    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:01:47.217853    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:01:47.217861    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:01:47.217894    4700 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1011 15:01:47.217899    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	  Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:01:47.217903    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	  Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:01:47.217907    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:01:47.217910    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:01:57.222023    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:02:02.223426    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:02:02.223935    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:02:02.262488    4700 logs.go:282] 2 containers: [24f46358727d dc72a658b8c9]
	I1011 15:02:02.262629    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:02:02.283015    4700 logs.go:282] 2 containers: [9f0e46648c4a ddb08b4b5869]
	I1011 15:02:02.283130    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:02:02.297137    4700 logs.go:282] 1 containers: [6105a62dc060]
	I1011 15:02:02.297221    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:02:02.309377    4700 logs.go:282] 2 containers: [92f60d23dbb0 3e8ced358756]
	I1011 15:02:02.309466    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:02:02.320188    4700 logs.go:282] 1 containers: [3da92cc90a0f]
	I1011 15:02:02.320261    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:02:02.330904    4700 logs.go:282] 2 containers: [ab10164156ed 8eff891e4c56]
	I1011 15:02:02.330986    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:02:02.342615    4700 logs.go:282] 0 containers: []
	W1011 15:02:02.342625    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:02:02.342689    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:02:02.353060    4700 logs.go:282] 0 containers: []
	W1011 15:02:02.353072    4700 logs.go:284] No container was found matching "storage-provisioner"
	I1011 15:02:02.353079    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:02:02.353085    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:02:02.387471    4700 logs.go:123] Gathering logs for kube-apiserver [24f46358727d] ...
	I1011 15:02:02.387481    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24f46358727d"
	I1011 15:02:02.402598    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:02:02.402611    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:02:02.428973    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:02:02.428982    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:02:02.441981    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:02:02.441992    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:02:02.446878    4700 logs.go:123] Gathering logs for kube-scheduler [92f60d23dbb0] ...
	I1011 15:02:02.446883    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f60d23dbb0"
	I1011 15:02:02.458729    4700 logs.go:123] Gathering logs for kube-proxy [3da92cc90a0f] ...
	I1011 15:02:02.458739    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3da92cc90a0f"
	I1011 15:02:02.471204    4700 logs.go:123] Gathering logs for kube-controller-manager [8eff891e4c56] ...
	I1011 15:02:02.471214    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff891e4c56"
	I1011 15:02:02.482403    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:02:02.482416    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:02:02.520438    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:02:02.520534    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:02:02.521003    4700 logs.go:123] Gathering logs for etcd [9f0e46648c4a] ...
	I1011 15:02:02.521009    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0e46648c4a"
	I1011 15:02:02.535243    4700 logs.go:123] Gathering logs for coredns [6105a62dc060] ...
	I1011 15:02:02.535257    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6105a62dc060"
	I1011 15:02:02.551824    4700 logs.go:123] Gathering logs for kube-scheduler [3e8ced358756] ...
	I1011 15:02:02.551836    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8ced358756"
	I1011 15:02:02.563941    4700 logs.go:123] Gathering logs for kube-apiserver [dc72a658b8c9] ...
	I1011 15:02:02.563952    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc72a658b8c9"
	I1011 15:02:02.580326    4700 logs.go:123] Gathering logs for etcd [ddb08b4b5869] ...
	I1011 15:02:02.580339    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddb08b4b5869"
	I1011 15:02:02.594404    4700 logs.go:123] Gathering logs for kube-controller-manager [ab10164156ed] ...
	I1011 15:02:02.594413    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab10164156ed"
	I1011 15:02:02.612772    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:02:02.612785    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:02:02.612808    4700 out.go:270] X Problems detected in kubelet:
	W1011 15:02:02.612813    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:02:02.612818    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:02:02.612824    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:02:02.612827    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:02:12.616311    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:02:17.618528    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:02:17.618725    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:02:17.634179    4700 logs.go:282] 2 containers: [24f46358727d dc72a658b8c9]
	I1011 15:02:17.634301    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:02:17.646434    4700 logs.go:282] 2 containers: [9f0e46648c4a ddb08b4b5869]
	I1011 15:02:17.646518    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:02:17.657270    4700 logs.go:282] 1 containers: [6105a62dc060]
	I1011 15:02:17.657354    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:02:17.668801    4700 logs.go:282] 2 containers: [92f60d23dbb0 3e8ced358756]
	I1011 15:02:17.668886    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:02:17.679825    4700 logs.go:282] 1 containers: [3da92cc90a0f]
	I1011 15:02:17.679903    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:02:17.691915    4700 logs.go:282] 2 containers: [ab10164156ed 8eff891e4c56]
	I1011 15:02:17.691989    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:02:17.702883    4700 logs.go:282] 0 containers: []
	W1011 15:02:17.702896    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:02:17.702964    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:02:17.713424    4700 logs.go:282] 0 containers: []
	W1011 15:02:17.713436    4700 logs.go:284] No container was found matching "storage-provisioner"
	I1011 15:02:17.713443    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:02:17.713449    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:02:17.740871    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:02:17.740880    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:02:17.753049    4700 logs.go:123] Gathering logs for kube-scheduler [92f60d23dbb0] ...
	I1011 15:02:17.753062    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f60d23dbb0"
	I1011 15:02:17.770243    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:02:17.770254    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:02:17.809476    4700 logs.go:123] Gathering logs for kube-apiserver [24f46358727d] ...
	I1011 15:02:17.809490    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24f46358727d"
	I1011 15:02:17.824856    4700 logs.go:123] Gathering logs for kube-apiserver [dc72a658b8c9] ...
	I1011 15:02:17.824870    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc72a658b8c9"
	I1011 15:02:17.839069    4700 logs.go:123] Gathering logs for etcd [9f0e46648c4a] ...
	I1011 15:02:17.839082    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0e46648c4a"
	I1011 15:02:17.853864    4700 logs.go:123] Gathering logs for coredns [6105a62dc060] ...
	I1011 15:02:17.853876    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6105a62dc060"
	I1011 15:02:17.865949    4700 logs.go:123] Gathering logs for kube-controller-manager [8eff891e4c56] ...
	I1011 15:02:17.865962    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff891e4c56"
	I1011 15:02:17.879243    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:02:17.879259    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:02:17.916811    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:02:17.916907    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:02:17.917395    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:02:17.917400    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:02:17.921640    4700 logs.go:123] Gathering logs for etcd [ddb08b4b5869] ...
	I1011 15:02:17.921646    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddb08b4b5869"
	I1011 15:02:17.939150    4700 logs.go:123] Gathering logs for kube-scheduler [3e8ced358756] ...
	I1011 15:02:17.939162    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8ced358756"
	I1011 15:02:17.950946    4700 logs.go:123] Gathering logs for kube-proxy [3da92cc90a0f] ...
	I1011 15:02:17.950957    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3da92cc90a0f"
	I1011 15:02:17.962633    4700 logs.go:123] Gathering logs for kube-controller-manager [ab10164156ed] ...
	I1011 15:02:17.962645    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab10164156ed"
	I1011 15:02:17.980545    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:02:17.980556    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:02:17.980580    4700 out.go:270] X Problems detected in kubelet:
	W1011 15:02:17.980584    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:02:17.980587    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:02:17.980590    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:02:17.980593    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:02:27.983427    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:02:32.987189    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:02:32.987316    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:02:32.999953    4700 logs.go:282] 2 containers: [24f46358727d dc72a658b8c9]
	I1011 15:02:33.000045    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:02:33.011437    4700 logs.go:282] 2 containers: [9f0e46648c4a ddb08b4b5869]
	I1011 15:02:33.011521    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:02:33.022335    4700 logs.go:282] 1 containers: [6105a62dc060]
	I1011 15:02:33.022415    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:02:33.033346    4700 logs.go:282] 2 containers: [92f60d23dbb0 3e8ced358756]
	I1011 15:02:33.033430    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:02:33.044516    4700 logs.go:282] 1 containers: [3da92cc90a0f]
	I1011 15:02:33.044602    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:02:33.060481    4700 logs.go:282] 2 containers: [ab10164156ed 8eff891e4c56]
	I1011 15:02:33.060566    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:02:33.076307    4700 logs.go:282] 0 containers: []
	W1011 15:02:33.076319    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:02:33.076390    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:02:33.088615    4700 logs.go:282] 0 containers: []
	W1011 15:02:33.088626    4700 logs.go:284] No container was found matching "storage-provisioner"
	I1011 15:02:33.088634    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:02:33.088640    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:02:33.125693    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:02:33.125791    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:02:33.126272    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:02:33.126279    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:02:33.162101    4700 logs.go:123] Gathering logs for kube-apiserver [24f46358727d] ...
	I1011 15:02:33.162113    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24f46358727d"
	I1011 15:02:33.185972    4700 logs.go:123] Gathering logs for kube-apiserver [dc72a658b8c9] ...
	I1011 15:02:33.185982    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc72a658b8c9"
	I1011 15:02:33.198007    4700 logs.go:123] Gathering logs for etcd [ddb08b4b5869] ...
	I1011 15:02:33.198020    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddb08b4b5869"
	I1011 15:02:33.211654    4700 logs.go:123] Gathering logs for kube-scheduler [92f60d23dbb0] ...
	I1011 15:02:33.211663    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f60d23dbb0"
	I1011 15:02:33.223605    4700 logs.go:123] Gathering logs for etcd [9f0e46648c4a] ...
	I1011 15:02:33.223617    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0e46648c4a"
	I1011 15:02:33.238454    4700 logs.go:123] Gathering logs for kube-scheduler [3e8ced358756] ...
	I1011 15:02:33.238465    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8ced358756"
	I1011 15:02:33.259812    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:02:33.259825    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:02:33.285264    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:02:33.285285    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:02:33.290334    4700 logs.go:123] Gathering logs for kube-proxy [3da92cc90a0f] ...
	I1011 15:02:33.290343    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3da92cc90a0f"
	I1011 15:02:33.303479    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:02:33.303495    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:02:33.315071    4700 logs.go:123] Gathering logs for coredns [6105a62dc060] ...
	I1011 15:02:33.315086    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6105a62dc060"
	I1011 15:02:33.326936    4700 logs.go:123] Gathering logs for kube-controller-manager [ab10164156ed] ...
	I1011 15:02:33.326947    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab10164156ed"
	I1011 15:02:33.345613    4700 logs.go:123] Gathering logs for kube-controller-manager [8eff891e4c56] ...
	I1011 15:02:33.345623    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff891e4c56"
	I1011 15:02:33.357119    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:02:33.357134    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:02:33.357172    4700 out.go:270] X Problems detected in kubelet:
	W1011 15:02:33.357178    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:02:33.357183    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:02:33.357193    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:02:33.357196    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:02:43.361156    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:02:48.363341    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:02:48.363772    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:02:48.394010    4700 logs.go:282] 2 containers: [24f46358727d dc72a658b8c9]
	I1011 15:02:48.394149    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:02:48.414037    4700 logs.go:282] 2 containers: [9f0e46648c4a ddb08b4b5869]
	I1011 15:02:48.414147    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:02:48.428347    4700 logs.go:282] 1 containers: [6105a62dc060]
	I1011 15:02:48.428429    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:02:48.440451    4700 logs.go:282] 2 containers: [92f60d23dbb0 3e8ced358756]
	I1011 15:02:48.440532    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:02:48.455639    4700 logs.go:282] 1 containers: [3da92cc90a0f]
	I1011 15:02:48.455722    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:02:48.465787    4700 logs.go:282] 2 containers: [ab10164156ed 8eff891e4c56]
	I1011 15:02:48.465861    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:02:48.476393    4700 logs.go:282] 0 containers: []
	W1011 15:02:48.476405    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:02:48.476472    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:02:48.486386    4700 logs.go:282] 0 containers: []
	W1011 15:02:48.486397    4700 logs.go:284] No container was found matching "storage-provisioner"
	I1011 15:02:48.486405    4700 logs.go:123] Gathering logs for kube-apiserver [24f46358727d] ...
	I1011 15:02:48.486411    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24f46358727d"
	I1011 15:02:48.500258    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:02:48.500267    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:02:48.512118    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:02:48.512129    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:02:48.547608    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:02:48.547701    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:02:48.548181    4700 logs.go:123] Gathering logs for etcd [9f0e46648c4a] ...
	I1011 15:02:48.548188    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0e46648c4a"
	I1011 15:02:48.566284    4700 logs.go:123] Gathering logs for etcd [ddb08b4b5869] ...
	I1011 15:02:48.566294    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddb08b4b5869"
	I1011 15:02:48.579374    4700 logs.go:123] Gathering logs for coredns [6105a62dc060] ...
	I1011 15:02:48.579383    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6105a62dc060"
	I1011 15:02:48.590642    4700 logs.go:123] Gathering logs for kube-scheduler [3e8ced358756] ...
	I1011 15:02:48.590655    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8ced358756"
	I1011 15:02:48.601676    4700 logs.go:123] Gathering logs for kube-controller-manager [ab10164156ed] ...
	I1011 15:02:48.601691    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab10164156ed"
	I1011 15:02:48.618671    4700 logs.go:123] Gathering logs for kube-controller-manager [8eff891e4c56] ...
	I1011 15:02:48.618680    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff891e4c56"
	I1011 15:02:48.630042    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:02:48.630056    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:02:48.634650    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:02:48.634660    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:02:48.668922    4700 logs.go:123] Gathering logs for kube-apiserver [dc72a658b8c9] ...
	I1011 15:02:48.668932    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc72a658b8c9"
	I1011 15:02:48.681012    4700 logs.go:123] Gathering logs for kube-scheduler [92f60d23dbb0] ...
	I1011 15:02:48.681024    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f60d23dbb0"
	I1011 15:02:48.692484    4700 logs.go:123] Gathering logs for kube-proxy [3da92cc90a0f] ...
	I1011 15:02:48.692494    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3da92cc90a0f"
	I1011 15:02:48.704640    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:02:48.704650    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:02:48.728710    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:02:48.728718    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:02:48.728747    4700 out.go:270] X Problems detected in kubelet:
	W1011 15:02:48.728752    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:02:48.728776    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:02:48.728782    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:02:48.728786    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:02:58.730903    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:03:03.733687    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:03:03.733881    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:03:03.748530    4700 logs.go:282] 2 containers: [24f46358727d dc72a658b8c9]
	I1011 15:03:03.748612    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:03:03.761623    4700 logs.go:282] 2 containers: [9f0e46648c4a ddb08b4b5869]
	I1011 15:03:03.761718    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:03:03.772795    4700 logs.go:282] 1 containers: [6105a62dc060]
	I1011 15:03:03.772873    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:03:03.783695    4700 logs.go:282] 2 containers: [92f60d23dbb0 3e8ced358756]
	I1011 15:03:03.783783    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:03:03.794542    4700 logs.go:282] 1 containers: [3da92cc90a0f]
	I1011 15:03:03.794621    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:03:03.805656    4700 logs.go:282] 2 containers: [ab10164156ed 8eff891e4c56]
	I1011 15:03:03.805742    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:03:03.816794    4700 logs.go:282] 0 containers: []
	W1011 15:03:03.816807    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:03:03.816875    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:03:03.827102    4700 logs.go:282] 0 containers: []
	W1011 15:03:03.827114    4700 logs.go:284] No container was found matching "storage-provisioner"
	I1011 15:03:03.827124    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:03:03.827133    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:03:03.865334    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:03:03.865436    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:03:03.865919    4700 logs.go:123] Gathering logs for kube-apiserver [dc72a658b8c9] ...
	I1011 15:03:03.865924    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc72a658b8c9"
	I1011 15:03:03.882058    4700 logs.go:123] Gathering logs for kube-controller-manager [8eff891e4c56] ...
	I1011 15:03:03.882068    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff891e4c56"
	I1011 15:03:03.893282    4700 logs.go:123] Gathering logs for kube-apiserver [24f46358727d] ...
	I1011 15:03:03.893295    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24f46358727d"
	I1011 15:03:03.908545    4700 logs.go:123] Gathering logs for etcd [ddb08b4b5869] ...
	I1011 15:03:03.908556    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddb08b4b5869"
	I1011 15:03:03.922291    4700 logs.go:123] Gathering logs for coredns [6105a62dc060] ...
	I1011 15:03:03.922301    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6105a62dc060"
	I1011 15:03:03.933785    4700 logs.go:123] Gathering logs for kube-proxy [3da92cc90a0f] ...
	I1011 15:03:03.933796    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3da92cc90a0f"
	I1011 15:03:03.945799    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:03:03.945811    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:03:03.957858    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:03:03.957870    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:03:04.032523    4700 logs.go:123] Gathering logs for etcd [9f0e46648c4a] ...
	I1011 15:03:04.032534    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0e46648c4a"
	I1011 15:03:04.046727    4700 logs.go:123] Gathering logs for kube-controller-manager [ab10164156ed] ...
	I1011 15:03:04.046740    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab10164156ed"
	I1011 15:03:04.068864    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:03:04.068879    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:03:04.074060    4700 logs.go:123] Gathering logs for kube-scheduler [92f60d23dbb0] ...
	I1011 15:03:04.074067    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f60d23dbb0"
	I1011 15:03:04.086345    4700 logs.go:123] Gathering logs for kube-scheduler [3e8ced358756] ...
	I1011 15:03:04.086356    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8ced358756"
	I1011 15:03:04.098999    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:03:04.099011    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:03:04.124397    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:03:04.124408    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:03:04.124446    4700 out.go:270] X Problems detected in kubelet:
	W1011 15:03:04.124452    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:03:04.124455    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:03:04.124459    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:03:04.124461    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:03:14.127968    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:03:19.129208    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:03:19.129306    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:03:19.141458    4700 logs.go:282] 2 containers: [24f46358727d dc72a658b8c9]
	I1011 15:03:19.141542    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:03:19.153397    4700 logs.go:282] 2 containers: [9f0e46648c4a ddb08b4b5869]
	I1011 15:03:19.153481    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:03:19.165555    4700 logs.go:282] 1 containers: [6105a62dc060]
	I1011 15:03:19.165643    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:03:19.177704    4700 logs.go:282] 2 containers: [92f60d23dbb0 3e8ced358756]
	I1011 15:03:19.177801    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:03:19.190042    4700 logs.go:282] 1 containers: [3da92cc90a0f]
	I1011 15:03:19.190129    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:03:19.202070    4700 logs.go:282] 2 containers: [ab10164156ed 8eff891e4c56]
	I1011 15:03:19.202150    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:03:19.213165    4700 logs.go:282] 0 containers: []
	W1011 15:03:19.213176    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:03:19.213249    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:03:19.224447    4700 logs.go:282] 0 containers: []
	W1011 15:03:19.224458    4700 logs.go:284] No container was found matching "storage-provisioner"
	I1011 15:03:19.224468    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:03:19.224475    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:03:19.229926    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:03:19.229937    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:03:19.275090    4700 logs.go:123] Gathering logs for kube-apiserver [24f46358727d] ...
	I1011 15:03:19.275101    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24f46358727d"
	I1011 15:03:19.291209    4700 logs.go:123] Gathering logs for kube-controller-manager [ab10164156ed] ...
	I1011 15:03:19.291222    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab10164156ed"
	I1011 15:03:19.310681    4700 logs.go:123] Gathering logs for kube-apiserver [dc72a658b8c9] ...
	I1011 15:03:19.310692    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc72a658b8c9"
	I1011 15:03:19.326808    4700 logs.go:123] Gathering logs for kube-scheduler [3e8ced358756] ...
	I1011 15:03:19.326820    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8ced358756"
	I1011 15:03:19.339995    4700 logs.go:123] Gathering logs for kube-controller-manager [8eff891e4c56] ...
	I1011 15:03:19.340007    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff891e4c56"
	I1011 15:03:19.352756    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:03:19.352777    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:03:19.391997    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:03:19.392096    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:03:19.392585    4700 logs.go:123] Gathering logs for etcd [9f0e46648c4a] ...
	I1011 15:03:19.392591    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0e46648c4a"
	I1011 15:03:19.408227    4700 logs.go:123] Gathering logs for coredns [6105a62dc060] ...
	I1011 15:03:19.408239    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6105a62dc060"
	I1011 15:03:19.421527    4700 logs.go:123] Gathering logs for kube-proxy [3da92cc90a0f] ...
	I1011 15:03:19.421538    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3da92cc90a0f"
	I1011 15:03:19.434780    4700 logs.go:123] Gathering logs for etcd [ddb08b4b5869] ...
	I1011 15:03:19.434792    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddb08b4b5869"
	I1011 15:03:19.449860    4700 logs.go:123] Gathering logs for kube-scheduler [92f60d23dbb0] ...
	I1011 15:03:19.449872    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f60d23dbb0"
	I1011 15:03:19.462825    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:03:19.462837    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:03:19.489367    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:03:19.489380    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:03:19.502446    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:03:19.502458    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:03:19.502487    4700 out.go:270] X Problems detected in kubelet:
	W1011 15:03:19.502492    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:03:19.502496    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:03:19.502500    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:03:19.502503    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:03:29.506470    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:03:34.508668    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:03:34.508840    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:03:34.524142    4700 logs.go:282] 2 containers: [24f46358727d dc72a658b8c9]
	I1011 15:03:34.524224    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:03:34.535140    4700 logs.go:282] 2 containers: [9f0e46648c4a ddb08b4b5869]
	I1011 15:03:34.535222    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:03:34.545235    4700 logs.go:282] 1 containers: [6105a62dc060]
	I1011 15:03:34.545314    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:03:34.557071    4700 logs.go:282] 2 containers: [92f60d23dbb0 3e8ced358756]
	I1011 15:03:34.557146    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:03:34.567402    4700 logs.go:282] 1 containers: [3da92cc90a0f]
	I1011 15:03:34.567468    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:03:34.577904    4700 logs.go:282] 2 containers: [ab10164156ed 8eff891e4c56]
	I1011 15:03:34.577985    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:03:34.588108    4700 logs.go:282] 0 containers: []
	W1011 15:03:34.588119    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:03:34.588185    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:03:34.597816    4700 logs.go:282] 0 containers: []
	W1011 15:03:34.597828    4700 logs.go:284] No container was found matching "storage-provisioner"
	I1011 15:03:34.597835    4700 logs.go:123] Gathering logs for coredns [6105a62dc060] ...
	I1011 15:03:34.597845    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6105a62dc060"
	I1011 15:03:34.611498    4700 logs.go:123] Gathering logs for kube-proxy [3da92cc90a0f] ...
	I1011 15:03:34.611510    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3da92cc90a0f"
	I1011 15:03:34.623646    4700 logs.go:123] Gathering logs for kube-apiserver [24f46358727d] ...
	I1011 15:03:34.623658    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24f46358727d"
	I1011 15:03:34.642764    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:03:34.642778    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:03:34.647063    4700 logs.go:123] Gathering logs for etcd [9f0e46648c4a] ...
	I1011 15:03:34.647072    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0e46648c4a"
	I1011 15:03:34.661179    4700 logs.go:123] Gathering logs for kube-controller-manager [ab10164156ed] ...
	I1011 15:03:34.661193    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab10164156ed"
	I1011 15:03:34.679703    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:03:34.679713    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:03:34.717024    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:03:34.717122    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:03:34.717607    4700 logs.go:123] Gathering logs for kube-scheduler [3e8ced358756] ...
	I1011 15:03:34.717612    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8ced358756"
	I1011 15:03:34.731917    4700 logs.go:123] Gathering logs for kube-controller-manager [8eff891e4c56] ...
	I1011 15:03:34.731933    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff891e4c56"
	I1011 15:03:34.743962    4700 logs.go:123] Gathering logs for kube-apiserver [dc72a658b8c9] ...
	I1011 15:03:34.743976    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc72a658b8c9"
	I1011 15:03:34.756093    4700 logs.go:123] Gathering logs for etcd [ddb08b4b5869] ...
	I1011 15:03:34.756103    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddb08b4b5869"
	I1011 15:03:34.769605    4700 logs.go:123] Gathering logs for kube-scheduler [92f60d23dbb0] ...
	I1011 15:03:34.769616    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f60d23dbb0"
	I1011 15:03:34.782015    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:03:34.782026    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:03:34.806620    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:03:34.806635    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:03:34.818465    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:03:34.818475    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:03:34.852913    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:03:34.852922    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:03:34.852953    4700 out.go:270] X Problems detected in kubelet:
	W1011 15:03:34.852959    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:03:34.852963    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:03:34.852967    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:03:34.852972    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:03:44.856951    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:03:49.859152    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:03:49.859403    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:03:49.877515    4700 logs.go:282] 2 containers: [24f46358727d dc72a658b8c9]
	I1011 15:03:49.877625    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:03:49.891366    4700 logs.go:282] 2 containers: [9f0e46648c4a ddb08b4b5869]
	I1011 15:03:49.891444    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:03:49.903097    4700 logs.go:282] 1 containers: [6105a62dc060]
	I1011 15:03:49.903179    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:03:49.913552    4700 logs.go:282] 2 containers: [92f60d23dbb0 3e8ced358756]
	I1011 15:03:49.913622    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:03:49.923985    4700 logs.go:282] 1 containers: [3da92cc90a0f]
	I1011 15:03:49.924060    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:03:49.934341    4700 logs.go:282] 2 containers: [ab10164156ed 8eff891e4c56]
	I1011 15:03:49.934425    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:03:49.944382    4700 logs.go:282] 0 containers: []
	W1011 15:03:49.944397    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:03:49.944458    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:03:49.955678    4700 logs.go:282] 0 containers: []
	W1011 15:03:49.955689    4700 logs.go:284] No container was found matching "storage-provisioner"
	I1011 15:03:49.955699    4700 logs.go:123] Gathering logs for kube-controller-manager [8eff891e4c56] ...
	I1011 15:03:49.955704    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff891e4c56"
	I1011 15:03:49.966585    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:03:49.966597    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:03:49.979114    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:03:49.979125    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:03:50.017805    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:03:50.017909    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:03:50.018388    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:03:50.018398    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:03:50.058858    4700 logs.go:123] Gathering logs for coredns [6105a62dc060] ...
	I1011 15:03:50.058871    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6105a62dc060"
	I1011 15:03:50.071611    4700 logs.go:123] Gathering logs for kube-scheduler [3e8ced358756] ...
	I1011 15:03:50.071623    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8ced358756"
	I1011 15:03:50.082887    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:03:50.082898    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:03:50.107401    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:03:50.107411    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:03:50.111560    4700 logs.go:123] Gathering logs for kube-apiserver [dc72a658b8c9] ...
	I1011 15:03:50.111566    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc72a658b8c9"
	I1011 15:03:50.125677    4700 logs.go:123] Gathering logs for etcd [9f0e46648c4a] ...
	I1011 15:03:50.125687    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0e46648c4a"
	I1011 15:03:50.139384    4700 logs.go:123] Gathering logs for kube-scheduler [92f60d23dbb0] ...
	I1011 15:03:50.139395    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f60d23dbb0"
	I1011 15:03:50.150704    4700 logs.go:123] Gathering logs for kube-apiserver [24f46358727d] ...
	I1011 15:03:50.150716    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24f46358727d"
	I1011 15:03:50.164366    4700 logs.go:123] Gathering logs for kube-proxy [3da92cc90a0f] ...
	I1011 15:03:50.164378    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3da92cc90a0f"
	I1011 15:03:50.175944    4700 logs.go:123] Gathering logs for etcd [ddb08b4b5869] ...
	I1011 15:03:50.175954    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddb08b4b5869"
	I1011 15:03:50.189169    4700 logs.go:123] Gathering logs for kube-controller-manager [ab10164156ed] ...
	I1011 15:03:50.189179    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab10164156ed"
	I1011 15:03:50.206647    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:03:50.206656    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:03:50.206681    4700 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1011 15:03:50.206685    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	  Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:03:50.206689    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	  Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:03:50.206704    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:03:50.206709    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:04:00.209503    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:05.211810    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:05.211892    4700 kubeadm.go:597] duration metric: took 4m8.288367917s to restartPrimaryControlPlane
	W1011 15:04:05.211941    4700 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1011 15:04:05.211975    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1011 15:04:06.113349    4700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 15:04:06.118735    4700 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 15:04:06.121678    4700 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 15:04:06.124509    4700 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 15:04:06.124515    4700 kubeadm.go:157] found existing configuration files:
	
	I1011 15:04:06.124545    4700 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57235 /etc/kubernetes/admin.conf
	I1011 15:04:06.127208    4700 kubeadm.go:163] "https://control-plane.minikube.internal:57235" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:57235 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 15:04:06.127243    4700 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 15:04:06.129683    4700 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57235 /etc/kubernetes/kubelet.conf
	I1011 15:04:06.132251    4700 kubeadm.go:163] "https://control-plane.minikube.internal:57235" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:57235 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 15:04:06.132276    4700 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 15:04:06.135110    4700 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57235 /etc/kubernetes/controller-manager.conf
	I1011 15:04:06.137395    4700 kubeadm.go:163] "https://control-plane.minikube.internal:57235" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:57235 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 15:04:06.137416    4700 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 15:04:06.140454    4700 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57235 /etc/kubernetes/scheduler.conf
	I1011 15:04:06.143518    4700 kubeadm.go:163] "https://control-plane.minikube.internal:57235" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:57235 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 15:04:06.143550    4700 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 15:04:06.146121    4700 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 15:04:06.163058    4700 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1011 15:04:06.163087    4700 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 15:04:06.209801    4700 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 15:04:06.209855    4700 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 15:04:06.209913    4700 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1011 15:04:06.270042    4700 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 15:04:06.274154    4700 out.go:235]   - Generating certificates and keys ...
	I1011 15:04:06.274189    4700 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 15:04:06.274219    4700 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 15:04:06.274257    4700 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 15:04:06.274309    4700 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 15:04:06.274346    4700 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 15:04:06.274370    4700 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 15:04:06.274491    4700 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 15:04:06.274534    4700 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 15:04:06.274569    4700 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 15:04:06.274607    4700 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 15:04:06.274628    4700 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 15:04:06.274709    4700 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 15:04:06.346365    4700 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 15:04:06.511958    4700 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 15:04:06.674459    4700 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 15:04:06.762037    4700 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 15:04:06.791013    4700 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 15:04:06.791364    4700 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 15:04:06.791384    4700 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 15:04:06.878957    4700 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 15:04:06.881556    4700 out.go:235]   - Booting up control plane ...
	I1011 15:04:06.881620    4700 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 15:04:06.882297    4700 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 15:04:06.882894    4700 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 15:04:06.883206    4700 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 15:04:06.884071    4700 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1011 15:04:11.390990    4700 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.506847 seconds
	I1011 15:04:11.391056    4700 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1011 15:04:11.395047    4700 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1011 15:04:11.904178    4700 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1011 15:04:11.904323    4700 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-130000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1011 15:04:12.408206    4700 kubeadm.go:310] [bootstrap-token] Using token: jg91qw.k5j416b1pvlje6xg
	I1011 15:04:12.413856    4700 out.go:235]   - Configuring RBAC rules ...
	I1011 15:04:12.413928    4700 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1011 15:04:12.413976    4700 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1011 15:04:12.420221    4700 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1011 15:04:12.421085    4700 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1011 15:04:12.421923    4700 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1011 15:04:12.422820    4700 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1011 15:04:12.426137    4700 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1011 15:04:12.611514    4700 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1011 15:04:12.812882    4700 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1011 15:04:12.813365    4700 kubeadm.go:310] 
	I1011 15:04:12.813395    4700 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1011 15:04:12.813402    4700 kubeadm.go:310] 
	I1011 15:04:12.813448    4700 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1011 15:04:12.813454    4700 kubeadm.go:310] 
	I1011 15:04:12.813469    4700 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1011 15:04:12.813501    4700 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1011 15:04:12.813528    4700 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1011 15:04:12.813533    4700 kubeadm.go:310] 
	I1011 15:04:12.813563    4700 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1011 15:04:12.813567    4700 kubeadm.go:310] 
	I1011 15:04:12.813608    4700 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1011 15:04:12.813612    4700 kubeadm.go:310] 
	I1011 15:04:12.813635    4700 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1011 15:04:12.813669    4700 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1011 15:04:12.813713    4700 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1011 15:04:12.813717    4700 kubeadm.go:310] 
	I1011 15:04:12.813757    4700 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1011 15:04:12.813801    4700 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1011 15:04:12.813806    4700 kubeadm.go:310] 
	I1011 15:04:12.813847    4700 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jg91qw.k5j416b1pvlje6xg \
	I1011 15:04:12.813910    4700 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ff7372af64c3996e800eaf522c3eb51c544993254bf1d45ae249aa6259e8117f \
	I1011 15:04:12.813923    4700 kubeadm.go:310] 	--control-plane 
	I1011 15:04:12.813926    4700 kubeadm.go:310] 
	I1011 15:04:12.813967    4700 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1011 15:04:12.813974    4700 kubeadm.go:310] 
	I1011 15:04:12.814030    4700 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jg91qw.k5j416b1pvlje6xg \
	I1011 15:04:12.814084    4700 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ff7372af64c3996e800eaf522c3eb51c544993254bf1d45ae249aa6259e8117f 
	I1011 15:04:12.814146    4700 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 15:04:12.814206    4700 cni.go:84] Creating CNI manager for ""
	I1011 15:04:12.814217    4700 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1011 15:04:12.818353    4700 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 15:04:12.824332    4700 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 15:04:12.827549    4700 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1011 15:04:12.832136    4700 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 15:04:12.832184    4700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 15:04:12.832213    4700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-130000 minikube.k8s.io/updated_at=2024_10_11T15_04_12_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=running-upgrade-130000 minikube.k8s.io/primary=true
	I1011 15:04:12.876817    4700 kubeadm.go:1113] duration metric: took 44.675ms to wait for elevateKubeSystemPrivileges
	I1011 15:04:12.876831    4700 ops.go:34] apiserver oom_adj: -16
	I1011 15:04:12.876839    4700 kubeadm.go:394] duration metric: took 4m15.983905709s to StartCluster
	I1011 15:04:12.876850    4700 settings.go:142] acquiring lock: {Name:mka75dc1604295e2b491b48ad476a4c06f6cece7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 15:04:12.876961    4700 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 15:04:12.877422    4700 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/kubeconfig: {Name:mkc848521291f94f61a80272f8eb43a8779805e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 15:04:12.877605    4700 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 15:04:12.877632    4700 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1011 15:04:12.877664    4700 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-130000"
	I1011 15:04:12.877674    4700 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-130000"
	W1011 15:04:12.877677    4700 addons.go:243] addon storage-provisioner should already be in state true
	I1011 15:04:12.877674    4700 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-130000"
	I1011 15:04:12.877685    4700 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-130000"
	I1011 15:04:12.877691    4700 host.go:66] Checking if "running-upgrade-130000" exists ...
	I1011 15:04:12.877794    4700 config.go:182] Loaded profile config "running-upgrade-130000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1011 15:04:12.878818    4700 kapi.go:59] client config for running-upgrade-130000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/running-upgrade-130000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/running-upgrade-130000/client.key", CAFile:"/Users/jenkins/minikube-integration/19749-1186/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104662e40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1011 15:04:12.879185    4700 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-130000"
	W1011 15:04:12.879190    4700 addons.go:243] addon default-storageclass should already be in state true
	I1011 15:04:12.879196    4700 host.go:66] Checking if "running-upgrade-130000" exists ...
	I1011 15:04:12.881291    4700 out.go:177] * Verifying Kubernetes components...
	I1011 15:04:12.881719    4700 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1011 15:04:12.885439    4700 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1011 15:04:12.885447    4700 sshutil.go:53] new ssh client: &{IP:localhost Port:57203 SSHKeyPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/running-upgrade-130000/id_rsa Username:docker}
	I1011 15:04:12.888235    4700 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 15:04:12.892336    4700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 15:04:12.896336    4700 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 15:04:12.896343    4700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1011 15:04:12.896349    4700 sshutil.go:53] new ssh client: &{IP:localhost Port:57203 SSHKeyPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/running-upgrade-130000/id_rsa Username:docker}
	I1011 15:04:12.992170    4700 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 15:04:12.999049    4700 api_server.go:52] waiting for apiserver process to appear ...
	I1011 15:04:12.999116    4700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 15:04:13.003418    4700 api_server.go:72] duration metric: took 125.80275ms to wait for apiserver process to appear ...
	I1011 15:04:13.003426    4700 api_server.go:88] waiting for apiserver healthz status ...
	I1011 15:04:13.003435    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:13.026543    4700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1011 15:04:13.052374    4700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 15:04:13.381200    4700 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1011 15:04:13.381212    4700 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1011 15:04:18.005469    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:18.005536    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:23.005837    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:23.005898    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:28.006673    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:28.006695    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:33.007231    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:33.007281    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:38.008066    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:38.008099    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:43.009026    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:43.009067    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1011 15:04:43.382096    4700 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1011 15:04:43.386269    4700 out.go:177] * Enabled addons: storage-provisioner
	I1011 15:04:43.394236    4700 addons.go:510] duration metric: took 30.517085292s for enable addons: enabled=[storage-provisioner]
	I1011 15:04:48.010286    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:48.010327    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:53.011942    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:53.011965    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:58.013704    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:58.013727    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:05:03.015711    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:05:03.015737    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:05:08.017838    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:05:08.017877    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:05:13.020085    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:05:13.020183    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:05:13.031494    4700 logs.go:282] 1 containers: [6a1874a90592]
	I1011 15:05:13.031576    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:05:13.042694    4700 logs.go:282] 1 containers: [c84b1906f7fd]
	I1011 15:05:13.042772    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:05:13.053304    4700 logs.go:282] 2 containers: [7f1165bcc644 eb84c0e2fa42]
	I1011 15:05:13.053380    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:05:13.067006    4700 logs.go:282] 1 containers: [b649cd1f1ae2]
	I1011 15:05:13.067085    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:05:13.077814    4700 logs.go:282] 1 containers: [573b330f3507]
	I1011 15:05:13.077887    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:05:13.088335    4700 logs.go:282] 1 containers: [6d49685ed855]
	I1011 15:05:13.088399    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:05:13.098792    4700 logs.go:282] 0 containers: []
	W1011 15:05:13.098802    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:05:13.098865    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:05:13.109160    4700 logs.go:282] 1 containers: [bbaa751bccbf]
	I1011 15:05:13.109174    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:05:13.109180    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:05:13.150292    4700 logs.go:123] Gathering logs for kube-apiserver [6a1874a90592] ...
	I1011 15:05:13.150304    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1874a90592"
	I1011 15:05:13.164687    4700 logs.go:123] Gathering logs for coredns [7f1165bcc644] ...
	I1011 15:05:13.164698    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f1165bcc644"
	I1011 15:05:13.176380    4700 logs.go:123] Gathering logs for kube-proxy [573b330f3507] ...
	I1011 15:05:13.176392    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 573b330f3507"
	I1011 15:05:13.191899    4700 logs.go:123] Gathering logs for kube-controller-manager [6d49685ed855] ...
	I1011 15:05:13.191909    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d49685ed855"
	I1011 15:05:13.209720    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:05:13.209734    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:05:13.221363    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:05:13.221378    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:05:13.240110    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:05:13.240201    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:05:13.257621    4700 logs.go:123] Gathering logs for etcd [c84b1906f7fd] ...
	I1011 15:05:13.257627    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84b1906f7fd"
	I1011 15:05:13.273862    4700 logs.go:123] Gathering logs for coredns [eb84c0e2fa42] ...
	I1011 15:05:13.273872    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb84c0e2fa42"
	I1011 15:05:13.285165    4700 logs.go:123] Gathering logs for kube-scheduler [b649cd1f1ae2] ...
	I1011 15:05:13.285178    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b649cd1f1ae2"
	I1011 15:05:13.308491    4700 logs.go:123] Gathering logs for storage-provisioner [bbaa751bccbf] ...
	I1011 15:05:13.308506    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbaa751bccbf"
	I1011 15:05:13.320199    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:05:13.320210    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:05:13.343620    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:05:13.343627    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:05:13.347669    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:05:13.347677    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:05:13.347700    4700 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1011 15:05:13.347706    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	  Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:05:13.347709    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	  Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:05:13.347713    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:05:13.347716    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:05:23.351753    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:05:28.354071    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:05:28.354522    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:05:28.384104    4700 logs.go:282] 1 containers: [6a1874a90592]
	I1011 15:05:28.384236    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:05:28.402485    4700 logs.go:282] 1 containers: [c84b1906f7fd]
	I1011 15:05:28.402589    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:05:28.416157    4700 logs.go:282] 2 containers: [7f1165bcc644 eb84c0e2fa42]
	I1011 15:05:28.416238    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:05:28.428148    4700 logs.go:282] 1 containers: [b649cd1f1ae2]
	I1011 15:05:28.428227    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:05:28.438798    4700 logs.go:282] 1 containers: [573b330f3507]
	I1011 15:05:28.438861    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:05:28.450134    4700 logs.go:282] 1 containers: [6d49685ed855]
	I1011 15:05:28.450209    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:05:28.463952    4700 logs.go:282] 0 containers: []
	W1011 15:05:28.463963    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:05:28.464028    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:05:28.474637    4700 logs.go:282] 1 containers: [bbaa751bccbf]
	I1011 15:05:28.474653    4700 logs.go:123] Gathering logs for coredns [eb84c0e2fa42] ...
	I1011 15:05:28.474659    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb84c0e2fa42"
	I1011 15:05:28.486356    4700 logs.go:123] Gathering logs for kube-scheduler [b649cd1f1ae2] ...
	I1011 15:05:28.486366    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b649cd1f1ae2"
	I1011 15:05:28.501561    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:05:28.501571    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:05:28.526387    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:05:28.526397    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:05:28.531072    4700 logs.go:123] Gathering logs for etcd [c84b1906f7fd] ...
	I1011 15:05:28.531080    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84b1906f7fd"
	I1011 15:05:28.545439    4700 logs.go:123] Gathering logs for kube-apiserver [6a1874a90592] ...
	I1011 15:05:28.545449    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1874a90592"
	I1011 15:05:28.559182    4700 logs.go:123] Gathering logs for coredns [7f1165bcc644] ...
	I1011 15:05:28.559192    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f1165bcc644"
	I1011 15:05:28.570492    4700 logs.go:123] Gathering logs for kube-proxy [573b330f3507] ...
	I1011 15:05:28.570503    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 573b330f3507"
	I1011 15:05:28.582384    4700 logs.go:123] Gathering logs for kube-controller-manager [6d49685ed855] ...
	I1011 15:05:28.582399    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d49685ed855"
	I1011 15:05:28.599617    4700 logs.go:123] Gathering logs for storage-provisioner [bbaa751bccbf] ...
	I1011 15:05:28.599631    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbaa751bccbf"
	I1011 15:05:28.611154    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:05:28.611164    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:05:28.622766    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:05:28.622780    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:05:28.642877    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:05:28.642970    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:05:28.660206    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:05:28.660213    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:05:28.695088    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:05:28.695099    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:05:28.695126    4700 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1011 15:05:28.695132    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	  Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:05:28.695135    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	  Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:05:28.695142    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:05:28.695145    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:05:38.699195    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:05:43.701067    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:05:43.701565    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:05:43.737556    4700 logs.go:282] 1 containers: [6a1874a90592]
	I1011 15:05:43.737700    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:05:43.757141    4700 logs.go:282] 1 containers: [c84b1906f7fd]
	I1011 15:05:43.757256    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:05:43.776097    4700 logs.go:282] 2 containers: [7f1165bcc644 eb84c0e2fa42]
	I1011 15:05:43.776184    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:05:43.787853    4700 logs.go:282] 1 containers: [b649cd1f1ae2]
	I1011 15:05:43.787930    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:05:43.798611    4700 logs.go:282] 1 containers: [573b330f3507]
	I1011 15:05:43.798684    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:05:43.809224    4700 logs.go:282] 1 containers: [6d49685ed855]
	I1011 15:05:43.809302    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:05:43.819928    4700 logs.go:282] 0 containers: []
	W1011 15:05:43.819944    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:05:43.820008    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:05:43.830520    4700 logs.go:282] 1 containers: [bbaa751bccbf]
	I1011 15:05:43.830535    4700 logs.go:123] Gathering logs for kube-apiserver [6a1874a90592] ...
	I1011 15:05:43.830540    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1874a90592"
	I1011 15:05:43.848884    4700 logs.go:123] Gathering logs for coredns [eb84c0e2fa42] ...
	I1011 15:05:43.848896    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb84c0e2fa42"
	I1011 15:05:43.861265    4700 logs.go:123] Gathering logs for kube-proxy [573b330f3507] ...
	I1011 15:05:43.861276    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 573b330f3507"
	I1011 15:05:43.873687    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:05:43.873698    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:05:43.878340    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:05:43.878347    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:05:43.914264    4700 logs.go:123] Gathering logs for etcd [c84b1906f7fd] ...
	I1011 15:05:43.914275    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84b1906f7fd"
	I1011 15:05:43.928736    4700 logs.go:123] Gathering logs for coredns [7f1165bcc644] ...
	I1011 15:05:43.928747    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f1165bcc644"
	I1011 15:05:43.940820    4700 logs.go:123] Gathering logs for kube-scheduler [b649cd1f1ae2] ...
	I1011 15:05:43.940832    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b649cd1f1ae2"
	I1011 15:05:43.956336    4700 logs.go:123] Gathering logs for kube-controller-manager [6d49685ed855] ...
	I1011 15:05:43.956345    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d49685ed855"
	I1011 15:05:43.979323    4700 logs.go:123] Gathering logs for storage-provisioner [bbaa751bccbf] ...
	I1011 15:05:43.979335    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbaa751bccbf"
	I1011 15:05:43.991934    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:05:43.991945    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:05:44.015638    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:05:44.015646    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:05:44.032874    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:05:44.032970    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:05:44.050518    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:05:44.050527    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:05:44.062770    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:05:44.062781    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:05:44.062805    4700 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1011 15:05:44.062810    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	  Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:05:44.062815    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	  Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:05:44.062820    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:05:44.062824    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:05:54.066778    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:05:59.068992    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:05:59.069155    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:05:59.083966    4700 logs.go:282] 1 containers: [6a1874a90592]
	I1011 15:05:59.084059    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:05:59.096556    4700 logs.go:282] 1 containers: [c84b1906f7fd]
	I1011 15:05:59.096638    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:05:59.108018    4700 logs.go:282] 2 containers: [7f1165bcc644 eb84c0e2fa42]
	I1011 15:05:59.108089    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:05:59.119791    4700 logs.go:282] 1 containers: [b649cd1f1ae2]
	I1011 15:05:59.119871    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:05:59.131629    4700 logs.go:282] 1 containers: [573b330f3507]
	I1011 15:05:59.131711    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:05:59.143334    4700 logs.go:282] 1 containers: [6d49685ed855]
	I1011 15:05:59.143415    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:05:59.154049    4700 logs.go:282] 0 containers: []
	W1011 15:05:59.154060    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:05:59.154126    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:05:59.165165    4700 logs.go:282] 1 containers: [bbaa751bccbf]
	I1011 15:05:59.165182    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:05:59.165189    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:05:59.169885    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:05:59.169893    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:05:59.210357    4700 logs.go:123] Gathering logs for coredns [7f1165bcc644] ...
	I1011 15:05:59.210369    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f1165bcc644"
	I1011 15:05:59.222564    4700 logs.go:123] Gathering logs for kube-controller-manager [6d49685ed855] ...
	I1011 15:05:59.222579    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d49685ed855"
	I1011 15:05:59.241835    4700 logs.go:123] Gathering logs for storage-provisioner [bbaa751bccbf] ...
	I1011 15:05:59.241846    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbaa751bccbf"
	I1011 15:05:59.254296    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:05:59.254307    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:05:59.267252    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:05:59.267265    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:05:59.286799    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:05:59.286895    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:05:59.304247    4700 logs.go:123] Gathering logs for etcd [c84b1906f7fd] ...
	I1011 15:05:59.304256    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84b1906f7fd"
	I1011 15:05:59.319202    4700 logs.go:123] Gathering logs for coredns [eb84c0e2fa42] ...
	I1011 15:05:59.319212    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb84c0e2fa42"
	I1011 15:05:59.331578    4700 logs.go:123] Gathering logs for kube-scheduler [b649cd1f1ae2] ...
	I1011 15:05:59.331592    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b649cd1f1ae2"
	I1011 15:05:59.347341    4700 logs.go:123] Gathering logs for kube-proxy [573b330f3507] ...
	I1011 15:05:59.347351    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 573b330f3507"
	I1011 15:05:59.359807    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:05:59.359817    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:05:59.384917    4700 logs.go:123] Gathering logs for kube-apiserver [6a1874a90592] ...
	I1011 15:05:59.384929    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1874a90592"
	I1011 15:05:59.400190    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:05:59.400202    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:05:59.400224    4700 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1011 15:05:59.400229    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	  Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:05:59.400231    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	  Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:05:59.400235    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:05:59.400238    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
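
Between probes, each cycle enumerates the control-plane containers one component at a time with docker ps -a --filter=name=k8s_<component> --format={{.ID}} (the Run: lines above, with the counts reported by logs.go:282). A hedged sketch of that discovery step follows; it runs the command locally rather than over SSH as ssh_runner.go does, and the helper name and component list are illustrative choices, not minikube's API.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs is an illustrative helper: it lists running and exited containers
// whose names match the k8s_<component> prefix, exactly as the Run: lines above do.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids) // mirrors logs.go:282
	}
}
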
	I1011 15:06:09.402331    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:06:14.403887    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:06:14.404046    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:06:14.415967    4700 logs.go:282] 1 containers: [6a1874a90592]
	I1011 15:06:14.416044    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:06:14.427460    4700 logs.go:282] 1 containers: [c84b1906f7fd]
	I1011 15:06:14.427539    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:06:14.438554    4700 logs.go:282] 2 containers: [7f1165bcc644 eb84c0e2fa42]
	I1011 15:06:14.438631    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:06:14.449988    4700 logs.go:282] 1 containers: [b649cd1f1ae2]
	I1011 15:06:14.450062    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:06:14.461176    4700 logs.go:282] 1 containers: [573b330f3507]
	I1011 15:06:14.461255    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:06:14.474901    4700 logs.go:282] 1 containers: [6d49685ed855]
	I1011 15:06:14.474974    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:06:14.485929    4700 logs.go:282] 0 containers: []
	W1011 15:06:14.485941    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:06:14.486013    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:06:14.497363    4700 logs.go:282] 1 containers: [bbaa751bccbf]
	I1011 15:06:14.497377    4700 logs.go:123] Gathering logs for kube-apiserver [6a1874a90592] ...
	I1011 15:06:14.497383    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1874a90592"
	I1011 15:06:14.516906    4700 logs.go:123] Gathering logs for etcd [c84b1906f7fd] ...
	I1011 15:06:14.516916    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84b1906f7fd"
	I1011 15:06:14.532025    4700 logs.go:123] Gathering logs for coredns [7f1165bcc644] ...
	I1011 15:06:14.532035    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f1165bcc644"
	I1011 15:06:14.549308    4700 logs.go:123] Gathering logs for coredns [eb84c0e2fa42] ...
	I1011 15:06:14.549320    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb84c0e2fa42"
	I1011 15:06:14.561899    4700 logs.go:123] Gathering logs for kube-controller-manager [6d49685ed855] ...
	I1011 15:06:14.561911    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d49685ed855"
	I1011 15:06:14.580108    4700 logs.go:123] Gathering logs for storage-provisioner [bbaa751bccbf] ...
	I1011 15:06:14.580119    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbaa751bccbf"
	I1011 15:06:14.591937    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:06:14.591947    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:06:14.616708    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:06:14.616717    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:06:14.652694    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:06:14.652705    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:06:14.665758    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:06:14.665769    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:06:14.670703    4700 logs.go:123] Gathering logs for kube-scheduler [b649cd1f1ae2] ...
	I1011 15:06:14.670710    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b649cd1f1ae2"
	I1011 15:06:14.686736    4700 logs.go:123] Gathering logs for kube-proxy [573b330f3507] ...
	I1011 15:06:14.686745    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 573b330f3507"
	I1011 15:06:14.701407    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:06:14.701418    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:06:14.718983    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:06:14.719076    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:06:14.735992    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:06:14.736000    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:06:14.736025    4700 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1011 15:06:14.736030    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	  Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:06:14.736042    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	  Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:06:14.736048    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:06:14.736052    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
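
Once the container IDs are known, the cycle shells out for the logs themselves: docker logs --tail 400 <id> per container, journalctl for the kubelet and Docker/cri-docker units, a dmesg filtered to warnings and above, kubectl describe nodes, and a crictl-or-docker fallback for container status (the "Gathering logs for ..." lines). The command strings in the sketch below are copied from the Run: lines; the wrapper function and its local execution are illustrative only, since in the test these commands run inside the guest VM via ssh_runner.go:195.

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one of the collection commands from the log and reports how much
// output came back; error handling is deliberately minimal for the sketch.
func gather(name, command string) {
	out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
	fmt.Printf("== %s: %d bytes, err=%v\n", name, len(out), err)
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	// 6a1874a90592 is the kube-apiserver container ID reported in the cycles above.
	gather("kube-apiserver", "docker logs --tail 400 6a1874a90592")
}
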
	I1011 15:06:24.740033    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:06:29.742541    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:06:29.742996    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:06:29.778011    4700 logs.go:282] 1 containers: [6a1874a90592]
	I1011 15:06:29.778160    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:06:29.797678    4700 logs.go:282] 1 containers: [c84b1906f7fd]
	I1011 15:06:29.797779    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:06:29.811730    4700 logs.go:282] 4 containers: [4fbadd8de248 5396e266a7e9 7f1165bcc644 eb84c0e2fa42]
	I1011 15:06:29.811818    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:06:29.823252    4700 logs.go:282] 1 containers: [b649cd1f1ae2]
	I1011 15:06:29.823337    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:06:29.833895    4700 logs.go:282] 1 containers: [573b330f3507]
	I1011 15:06:29.833962    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:06:29.844384    4700 logs.go:282] 1 containers: [6d49685ed855]
	I1011 15:06:29.844464    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:06:29.854811    4700 logs.go:282] 0 containers: []
	W1011 15:06:29.854826    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:06:29.854883    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:06:29.865757    4700 logs.go:282] 1 containers: [bbaa751bccbf]
	I1011 15:06:29.865775    4700 logs.go:123] Gathering logs for storage-provisioner [bbaa751bccbf] ...
	I1011 15:06:29.865781    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbaa751bccbf"
	I1011 15:06:29.877949    4700 logs.go:123] Gathering logs for kube-apiserver [6a1874a90592] ...
	I1011 15:06:29.877963    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1874a90592"
	I1011 15:06:29.892461    4700 logs.go:123] Gathering logs for etcd [c84b1906f7fd] ...
	I1011 15:06:29.892470    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84b1906f7fd"
	I1011 15:06:29.906116    4700 logs.go:123] Gathering logs for coredns [5396e266a7e9] ...
	I1011 15:06:29.906125    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5396e266a7e9"
	I1011 15:06:29.917275    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:06:29.917287    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:06:29.940715    4700 logs.go:123] Gathering logs for kube-proxy [573b330f3507] ...
	I1011 15:06:29.940722    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 573b330f3507"
	I1011 15:06:29.952698    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:06:29.952712    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:06:29.964113    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:06:29.964125    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:06:29.981987    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:06:29.982079    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:06:29.999529    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:06:29.999534    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:06:30.004089    4700 logs.go:123] Gathering logs for coredns [4fbadd8de248] ...
	I1011 15:06:30.004095    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fbadd8de248"
	I1011 15:06:30.018309    4700 logs.go:123] Gathering logs for coredns [7f1165bcc644] ...
	I1011 15:06:30.018320    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f1165bcc644"
	I1011 15:06:30.038038    4700 logs.go:123] Gathering logs for coredns [eb84c0e2fa42] ...
	I1011 15:06:30.038049    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb84c0e2fa42"
	I1011 15:06:30.049648    4700 logs.go:123] Gathering logs for kube-scheduler [b649cd1f1ae2] ...
	I1011 15:06:30.049658    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b649cd1f1ae2"
	I1011 15:06:30.066908    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:06:30.066923    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:06:30.103714    4700 logs.go:123] Gathering logs for kube-controller-manager [6d49685ed855] ...
	I1011 15:06:30.103727    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d49685ed855"
	I1011 15:06:30.125034    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:06:30.125044    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:06:30.125070    4700 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1011 15:06:30.125074    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	  Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:06:30.125079    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	  Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:06:30.125082    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:06:30.125085    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
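
Every cycle ends with the same two "Found kubelet problem" warnings (logs.go:138): the kubelet's reflector cannot list or watch the coredns ConfigMap because the apiserver reports no relationship between node 'running-upgrade-130000' and that object, a denial characteristic of the apiserver's Node authorizer. The detection itself is just a scan of the kubelet journal for suspicious lines; here is a minimal sketch of that idea, assuming a plain substring match (minikube's real matching rules in logs.go may differ), with the journal excerpt trimmed from the log above plus one invented benign line.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	// Trimmed excerpt of the kubelet journal shown above, plus one made-up benign line.
	journal := `Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984 4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden
Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002 4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap
Oct 11 22:00:16 running-upgrade-130000 kubelet[4162]: I1011 22:00:16.000000 4162 kubelet.go:1 illustrative healthy line`

	scanner := bufio.NewScanner(strings.NewReader(journal))
	for scanner.Scan() {
		line := scanner.Text()
		// Assumed patterns for the sketch; the real rules in logs.go may differ.
		if strings.Contains(line, "forbidden") || strings.Contains(line, "Failed to watch") {
			fmt.Println("Found kubelet problem:", line)
		}
	}
}
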
	I1011 15:06:40.129118    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:06:45.131570    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:06:45.131839    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:06:45.153431    4700 logs.go:282] 1 containers: [6a1874a90592]
	I1011 15:06:45.153529    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:06:45.170454    4700 logs.go:282] 1 containers: [c84b1906f7fd]
	I1011 15:06:45.170542    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:06:45.187082    4700 logs.go:282] 4 containers: [4fbadd8de248 5396e266a7e9 7f1165bcc644 eb84c0e2fa42]
	I1011 15:06:45.187166    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:06:45.199251    4700 logs.go:282] 1 containers: [b649cd1f1ae2]
	I1011 15:06:45.199326    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:06:45.210514    4700 logs.go:282] 1 containers: [573b330f3507]
	I1011 15:06:45.210590    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:06:45.221682    4700 logs.go:282] 1 containers: [6d49685ed855]
	I1011 15:06:45.221772    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:06:45.233453    4700 logs.go:282] 0 containers: []
	W1011 15:06:45.233464    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:06:45.233528    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:06:45.243609    4700 logs.go:282] 1 containers: [bbaa751bccbf]
	I1011 15:06:45.243632    4700 logs.go:123] Gathering logs for coredns [5396e266a7e9] ...
	I1011 15:06:45.243637    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5396e266a7e9"
	I1011 15:06:45.255086    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:06:45.255095    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:06:45.279886    4700 logs.go:123] Gathering logs for coredns [7f1165bcc644] ...
	I1011 15:06:45.279892    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f1165bcc644"
	I1011 15:06:45.291961    4700 logs.go:123] Gathering logs for coredns [eb84c0e2fa42] ...
	I1011 15:06:45.291975    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb84c0e2fa42"
	I1011 15:06:45.303946    4700 logs.go:123] Gathering logs for kube-scheduler [b649cd1f1ae2] ...
	I1011 15:06:45.303959    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b649cd1f1ae2"
	I1011 15:06:45.320252    4700 logs.go:123] Gathering logs for storage-provisioner [bbaa751bccbf] ...
	I1011 15:06:45.320266    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbaa751bccbf"
	I1011 15:06:45.332261    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:06:45.332270    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:06:45.345650    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:06:45.345664    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:06:45.363921    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:06:45.364014    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:06:45.382054    4700 logs.go:123] Gathering logs for etcd [c84b1906f7fd] ...
	I1011 15:06:45.382060    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84b1906f7fd"
	I1011 15:06:45.396180    4700 logs.go:123] Gathering logs for coredns [4fbadd8de248] ...
	I1011 15:06:45.396191    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fbadd8de248"
	I1011 15:06:45.407520    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:06:45.407530    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:06:45.412208    4700 logs.go:123] Gathering logs for kube-controller-manager [6d49685ed855] ...
	I1011 15:06:45.412215    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d49685ed855"
	I1011 15:06:45.429832    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:06:45.429845    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:06:45.468374    4700 logs.go:123] Gathering logs for kube-apiserver [6a1874a90592] ...
	I1011 15:06:45.468386    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1874a90592"
	I1011 15:06:45.484702    4700 logs.go:123] Gathering logs for kube-proxy [573b330f3507] ...
	I1011 15:06:45.484713    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 573b330f3507"
	I1011 15:06:45.500791    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:06:45.500801    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:06:45.500826    4700 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1011 15:06:45.500846    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	  Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:06:45.500852    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	  Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:06:45.500856    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:06:45.500862    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:06:55.503049    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:07:00.503472    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:07:00.503753    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:07:00.535098    4700 logs.go:282] 1 containers: [6a1874a90592]
	I1011 15:07:00.535235    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:07:00.560085    4700 logs.go:282] 1 containers: [c84b1906f7fd]
	I1011 15:07:00.560176    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:07:00.572343    4700 logs.go:282] 4 containers: [4fbadd8de248 5396e266a7e9 7f1165bcc644 eb84c0e2fa42]
	I1011 15:07:00.572422    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:07:00.583326    4700 logs.go:282] 1 containers: [b649cd1f1ae2]
	I1011 15:07:00.583409    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:07:00.598042    4700 logs.go:282] 1 containers: [573b330f3507]
	I1011 15:07:00.598133    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:07:00.636123    4700 logs.go:282] 1 containers: [6d49685ed855]
	I1011 15:07:00.636202    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:07:00.647306    4700 logs.go:282] 0 containers: []
	W1011 15:07:00.647316    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:07:00.647382    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:07:00.657615    4700 logs.go:282] 1 containers: [bbaa751bccbf]
	I1011 15:07:00.657634    4700 logs.go:123] Gathering logs for storage-provisioner [bbaa751bccbf] ...
	I1011 15:07:00.657639    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbaa751bccbf"
	I1011 15:07:00.669589    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:07:00.669599    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:07:00.693914    4700 logs.go:123] Gathering logs for coredns [4fbadd8de248] ...
	I1011 15:07:00.693924    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fbadd8de248"
	I1011 15:07:00.706285    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:07:00.706297    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:07:00.741167    4700 logs.go:123] Gathering logs for kube-apiserver [6a1874a90592] ...
	I1011 15:07:00.741179    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1874a90592"
	I1011 15:07:00.755933    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:07:00.755942    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:07:00.773649    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:07:00.773743    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:07:00.790818    4700 logs.go:123] Gathering logs for etcd [c84b1906f7fd] ...
	I1011 15:07:00.790824    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84b1906f7fd"
	I1011 15:07:00.804574    4700 logs.go:123] Gathering logs for coredns [5396e266a7e9] ...
	I1011 15:07:00.804585    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5396e266a7e9"
	I1011 15:07:00.816655    4700 logs.go:123] Gathering logs for coredns [7f1165bcc644] ...
	I1011 15:07:00.816666    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f1165bcc644"
	I1011 15:07:00.828675    4700 logs.go:123] Gathering logs for coredns [eb84c0e2fa42] ...
	I1011 15:07:00.828685    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb84c0e2fa42"
	I1011 15:07:00.840194    4700 logs.go:123] Gathering logs for kube-scheduler [b649cd1f1ae2] ...
	I1011 15:07:00.840206    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b649cd1f1ae2"
	I1011 15:07:00.855201    4700 logs.go:123] Gathering logs for kube-proxy [573b330f3507] ...
	I1011 15:07:00.855211    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 573b330f3507"
	I1011 15:07:00.866787    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:07:00.866799    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:07:00.871760    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:07:00.871766    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:07:00.883671    4700 logs.go:123] Gathering logs for kube-controller-manager [6d49685ed855] ...
	I1011 15:07:00.883684    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d49685ed855"
	I1011 15:07:00.902280    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:07:00.902290    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:07:00.902315    4700 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1011 15:07:00.902320    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	  Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:07:00.902323    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	  Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:07:00.902327    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:07:00.902331    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:07:10.906328    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:07:15.908612    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:07:15.908766    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:07:15.919819    4700 logs.go:282] 1 containers: [6a1874a90592]
	I1011 15:07:15.919899    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:07:15.930632    4700 logs.go:282] 1 containers: [c84b1906f7fd]
	I1011 15:07:15.930703    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:07:15.941682    4700 logs.go:282] 4 containers: [4fbadd8de248 5396e266a7e9 7f1165bcc644 eb84c0e2fa42]
	I1011 15:07:15.941764    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:07:15.952201    4700 logs.go:282] 1 containers: [b649cd1f1ae2]
	I1011 15:07:15.952266    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:07:15.962406    4700 logs.go:282] 1 containers: [573b330f3507]
	I1011 15:07:15.962479    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:07:15.973549    4700 logs.go:282] 1 containers: [6d49685ed855]
	I1011 15:07:15.973624    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:07:15.983498    4700 logs.go:282] 0 containers: []
	W1011 15:07:15.983509    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:07:15.983574    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:07:15.994278    4700 logs.go:282] 1 containers: [bbaa751bccbf]
	I1011 15:07:15.994297    4700 logs.go:123] Gathering logs for kube-apiserver [6a1874a90592] ...
	I1011 15:07:15.994303    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1874a90592"
	I1011 15:07:16.008592    4700 logs.go:123] Gathering logs for coredns [eb84c0e2fa42] ...
	I1011 15:07:16.008605    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb84c0e2fa42"
	I1011 15:07:16.021119    4700 logs.go:123] Gathering logs for coredns [4fbadd8de248] ...
	I1011 15:07:16.021129    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fbadd8de248"
	I1011 15:07:16.033750    4700 logs.go:123] Gathering logs for coredns [5396e266a7e9] ...
	I1011 15:07:16.033762    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5396e266a7e9"
	I1011 15:07:16.045773    4700 logs.go:123] Gathering logs for kube-controller-manager [6d49685ed855] ...
	I1011 15:07:16.045785    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d49685ed855"
	I1011 15:07:16.068324    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:07:16.068337    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:07:16.080091    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:07:16.080104    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:07:16.099648    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:07:16.099742    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:07:16.116895    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:07:16.116902    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:07:16.121313    4700 logs.go:123] Gathering logs for etcd [c84b1906f7fd] ...
	I1011 15:07:16.121323    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84b1906f7fd"
	I1011 15:07:16.135300    4700 logs.go:123] Gathering logs for storage-provisioner [bbaa751bccbf] ...
	I1011 15:07:16.135311    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbaa751bccbf"
	I1011 15:07:16.147089    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:07:16.147098    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:07:16.171702    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:07:16.171712    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:07:16.206066    4700 logs.go:123] Gathering logs for coredns [7f1165bcc644] ...
	I1011 15:07:16.206076    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f1165bcc644"
	I1011 15:07:16.218842    4700 logs.go:123] Gathering logs for kube-scheduler [b649cd1f1ae2] ...
	I1011 15:07:16.218856    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b649cd1f1ae2"
	I1011 15:07:16.234128    4700 logs.go:123] Gathering logs for kube-proxy [573b330f3507] ...
	I1011 15:07:16.234140    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 573b330f3507"
	I1011 15:07:16.245697    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:07:16.245710    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:07:16.245739    4700 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1011 15:07:16.245744    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	  Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:07:16.245748    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	  Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:07:16.245751    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:07:16.245754    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:07:26.249520    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:07:31.251724    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:07:31.251828    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:07:31.263517    4700 logs.go:282] 1 containers: [6a1874a90592]
	I1011 15:07:31.263618    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:07:31.274437    4700 logs.go:282] 1 containers: [c84b1906f7fd]
	I1011 15:07:31.274519    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:07:31.285460    4700 logs.go:282] 4 containers: [4fbadd8de248 5396e266a7e9 7f1165bcc644 eb84c0e2fa42]
	I1011 15:07:31.285543    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:07:31.296933    4700 logs.go:282] 1 containers: [b649cd1f1ae2]
	I1011 15:07:31.297016    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:07:31.308060    4700 logs.go:282] 1 containers: [573b330f3507]
	I1011 15:07:31.308139    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:07:31.321013    4700 logs.go:282] 1 containers: [6d49685ed855]
	I1011 15:07:31.321090    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:07:31.331837    4700 logs.go:282] 0 containers: []
	W1011 15:07:31.331849    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:07:31.331911    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:07:31.343743    4700 logs.go:282] 1 containers: [bbaa751bccbf]
	I1011 15:07:31.343760    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:07:31.343765    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:07:31.348331    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:07:31.348338    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:07:31.368504    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:07:31.368598    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:07:31.386629    4700 logs.go:123] Gathering logs for kube-controller-manager [6d49685ed855] ...
	I1011 15:07:31.386637    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d49685ed855"
	I1011 15:07:31.404725    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:07:31.404741    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:07:31.429953    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:07:31.429973    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:07:31.442726    4700 logs.go:123] Gathering logs for coredns [5396e266a7e9] ...
	I1011 15:07:31.442739    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5396e266a7e9"
	I1011 15:07:31.456596    4700 logs.go:123] Gathering logs for coredns [4fbadd8de248] ...
	I1011 15:07:31.456608    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fbadd8de248"
	I1011 15:07:31.470018    4700 logs.go:123] Gathering logs for kube-scheduler [b649cd1f1ae2] ...
	I1011 15:07:31.470030    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b649cd1f1ae2"
	I1011 15:07:31.486134    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:07:31.486148    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:07:31.523176    4700 logs.go:123] Gathering logs for etcd [c84b1906f7fd] ...
	I1011 15:07:31.523188    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84b1906f7fd"
	I1011 15:07:31.537927    4700 logs.go:123] Gathering logs for coredns [7f1165bcc644] ...
	I1011 15:07:31.537941    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f1165bcc644"
	I1011 15:07:31.552373    4700 logs.go:123] Gathering logs for coredns [eb84c0e2fa42] ...
	I1011 15:07:31.552385    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb84c0e2fa42"
	I1011 15:07:31.564888    4700 logs.go:123] Gathering logs for kube-proxy [573b330f3507] ...
	I1011 15:07:31.564900    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 573b330f3507"
	I1011 15:07:31.578096    4700 logs.go:123] Gathering logs for storage-provisioner [bbaa751bccbf] ...
	I1011 15:07:31.578107    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbaa751bccbf"
	I1011 15:07:31.591320    4700 logs.go:123] Gathering logs for kube-apiserver [6a1874a90592] ...
	I1011 15:07:31.591331    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1874a90592"
	I1011 15:07:31.606723    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:07:31.606734    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:07:31.606762    4700 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1011 15:07:31.606769    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	  Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:07:31.606772    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	  Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:07:31.606776    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:07:31.606779    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:07:41.610744    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:07:46.612920    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:07:46.613025    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:07:46.624023    4700 logs.go:282] 1 containers: [6a1874a90592]
	I1011 15:07:46.624102    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:07:46.634315    4700 logs.go:282] 1 containers: [c84b1906f7fd]
	I1011 15:07:46.634386    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:07:46.645303    4700 logs.go:282] 4 containers: [4fbadd8de248 5396e266a7e9 7f1165bcc644 eb84c0e2fa42]
	I1011 15:07:46.645393    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:07:46.656046    4700 logs.go:282] 1 containers: [b649cd1f1ae2]
	I1011 15:07:46.656127    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:07:46.672268    4700 logs.go:282] 1 containers: [573b330f3507]
	I1011 15:07:46.672341    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:07:46.682630    4700 logs.go:282] 1 containers: [6d49685ed855]
	I1011 15:07:46.682704    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:07:46.695381    4700 logs.go:282] 0 containers: []
	W1011 15:07:46.695395    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:07:46.695472    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:07:46.705762    4700 logs.go:282] 1 containers: [bbaa751bccbf]
	I1011 15:07:46.705780    4700 logs.go:123] Gathering logs for etcd [c84b1906f7fd] ...
	I1011 15:07:46.705784    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84b1906f7fd"
	I1011 15:07:46.720342    4700 logs.go:123] Gathering logs for kube-scheduler [b649cd1f1ae2] ...
	I1011 15:07:46.720353    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b649cd1f1ae2"
	I1011 15:07:46.736091    4700 logs.go:123] Gathering logs for storage-provisioner [bbaa751bccbf] ...
	I1011 15:07:46.736101    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbaa751bccbf"
	I1011 15:07:46.748473    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:07:46.748482    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:07:46.787578    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:07:46.787589    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:07:46.799901    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:07:46.799915    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:07:46.824887    4700 logs.go:123] Gathering logs for kube-apiserver [6a1874a90592] ...
	I1011 15:07:46.824895    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1874a90592"
	I1011 15:07:46.838749    4700 logs.go:123] Gathering logs for coredns [4fbadd8de248] ...
	I1011 15:07:46.838760    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fbadd8de248"
	I1011 15:07:46.850720    4700 logs.go:123] Gathering logs for coredns [5396e266a7e9] ...
	I1011 15:07:46.850731    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5396e266a7e9"
	I1011 15:07:46.863988    4700 logs.go:123] Gathering logs for kube-proxy [573b330f3507] ...
	I1011 15:07:46.864002    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 573b330f3507"
	I1011 15:07:46.878314    4700 logs.go:123] Gathering logs for kube-controller-manager [6d49685ed855] ...
	I1011 15:07:46.878326    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d49685ed855"
	I1011 15:07:46.895734    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:07:46.895746    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:07:46.914427    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:07:46.914522    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:07:46.932115    4700 logs.go:123] Gathering logs for coredns [7f1165bcc644] ...
	I1011 15:07:46.932122    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f1165bcc644"
	I1011 15:07:46.944216    4700 logs.go:123] Gathering logs for coredns [eb84c0e2fa42] ...
	I1011 15:07:46.944227    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb84c0e2fa42"
	I1011 15:07:46.956235    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:07:46.956246    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:07:46.961205    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:07:46.961216    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:07:46.961241    4700 out.go:270] X Problems detected in kubelet:
	W1011 15:07:46.961245    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:07:46.961249    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:07:46.961252    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:07:46.961255    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:07:56.963601    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:08:01.965746    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:08:01.965849    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:08:01.992473    4700 logs.go:282] 1 containers: [6a1874a90592]
	I1011 15:08:01.992567    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:08:02.014494    4700 logs.go:282] 1 containers: [c84b1906f7fd]
	I1011 15:08:02.014576    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:08:02.025598    4700 logs.go:282] 4 containers: [4fbadd8de248 5396e266a7e9 7f1165bcc644 eb84c0e2fa42]
	I1011 15:08:02.025685    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:08:02.036284    4700 logs.go:282] 1 containers: [b649cd1f1ae2]
	I1011 15:08:02.036361    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:08:02.046501    4700 logs.go:282] 1 containers: [573b330f3507]
	I1011 15:08:02.046587    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:08:02.057773    4700 logs.go:282] 1 containers: [6d49685ed855]
	I1011 15:08:02.057843    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:08:02.068418    4700 logs.go:282] 0 containers: []
	W1011 15:08:02.068428    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:08:02.068490    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:08:02.079380    4700 logs.go:282] 1 containers: [bbaa751bccbf]
	I1011 15:08:02.079397    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:08:02.079404    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:08:02.098555    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:08:02.098649    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:08:02.115756    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:08:02.115761    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:08:02.150925    4700 logs.go:123] Gathering logs for coredns [4fbadd8de248] ...
	I1011 15:08:02.150936    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fbadd8de248"
	I1011 15:08:02.162643    4700 logs.go:123] Gathering logs for coredns [7f1165bcc644] ...
	I1011 15:08:02.162655    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f1165bcc644"
	I1011 15:08:02.174386    4700 logs.go:123] Gathering logs for kube-proxy [573b330f3507] ...
	I1011 15:08:02.174397    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 573b330f3507"
	I1011 15:08:02.187120    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:08:02.187131    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:08:02.199055    4700 logs.go:123] Gathering logs for etcd [c84b1906f7fd] ...
	I1011 15:08:02.199069    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84b1906f7fd"
	I1011 15:08:02.213329    4700 logs.go:123] Gathering logs for coredns [eb84c0e2fa42] ...
	I1011 15:08:02.213342    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb84c0e2fa42"
	I1011 15:08:02.225068    4700 logs.go:123] Gathering logs for coredns [5396e266a7e9] ...
	I1011 15:08:02.225078    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5396e266a7e9"
	I1011 15:08:02.243921    4700 logs.go:123] Gathering logs for kube-controller-manager [6d49685ed855] ...
	I1011 15:08:02.243931    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d49685ed855"
	I1011 15:08:02.261622    4700 logs.go:123] Gathering logs for storage-provisioner [bbaa751bccbf] ...
	I1011 15:08:02.261633    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbaa751bccbf"
	I1011 15:08:02.273478    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:08:02.273489    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:08:02.278619    4700 logs.go:123] Gathering logs for kube-apiserver [6a1874a90592] ...
	I1011 15:08:02.278625    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1874a90592"
	I1011 15:08:02.292844    4700 logs.go:123] Gathering logs for kube-scheduler [b649cd1f1ae2] ...
	I1011 15:08:02.292853    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b649cd1f1ae2"
	I1011 15:08:02.308001    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:08:02.308011    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:08:02.332470    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:08:02.332479    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:08:02.332505    4700 out.go:270] X Problems detected in kubelet:
	W1011 15:08:02.332510    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:08:02.332513    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:08:02.332516    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:08:02.332519    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:08:12.336499    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:08:17.338714    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:08:17.342501    4700 out.go:201] 
	W1011 15:08:17.347341    4700 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1011 15:08:17.347358    4700 out.go:270] * 
	W1011 15:08:17.348332    4700 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 15:08:17.358328    4700 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-130000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
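The exit status 80 corresponds to the GUEST_START error recorded at the end of the stderr capture above: the new binary gave the restarted node 6m0s to report a healthy API server, repeatedly polling the guest's healthz endpoint ("Checking apiserver healthz at https://10.0.2.15:8443/healthz ...") and gathering logs between attempts, until the overall wait expired with "apiserver healthz never reported healthy: context deadline exceeded". The sketch below is only a minimal illustration of that kind of wait loop, not minikube's actual implementation; the function name, the 5-second per-request timeout, and the poll interval are assumptions inferred from the timestamps in the log.
	// Illustrative sketch only (not minikube source): a healthz wait loop of the
	// kind reflected in the "Checking apiserver healthz ..." / "stopped: ...
	// context deadline exceeded" lines above. Names and durations are assumptions.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	func waitForHealthz(url string, overall time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // per-request timeout ("Client.Timeout exceeded while awaiting headers")
			Transport: &http.Transport{
				// assumption: skip verification because the guest apiserver cert is not trusted by the host
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(overall)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				ok := resp.StatusCode == http.StatusOK
				resp.Body.Close()
				if ok {
					return nil // healthz reported healthy
				}
			}
			time.Sleep(10 * time.Second) // roughly the gap between checks in the log
		}
		return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
	}
	
	func main() {
		// 6m0s matches the "wait 6m0s for node" figure in the GUEST_START message
		if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
			fmt.Println("X Exiting due to GUEST_START:", err)
		}
	}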
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-10-11 15:08:17.439266 -0700 PDT m=+4246.868380376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-130000 -n running-upgrade-130000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-130000 -n running-upgrade-130000: exit status 2 (15.57889725s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-130000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-818000          | force-systemd-flag-818000 | jenkins | v1.34.0 | 11 Oct 24 14:58 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-075000              | force-systemd-env-075000  | jenkins | v1.34.0 | 11 Oct 24 14:58 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-075000           | force-systemd-env-075000  | jenkins | v1.34.0 | 11 Oct 24 14:58 PDT | 11 Oct 24 14:58 PDT |
	| start   | -p docker-flags-785000                | docker-flags-785000       | jenkins | v1.34.0 | 11 Oct 24 14:58 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-818000             | force-systemd-flag-818000 | jenkins | v1.34.0 | 11 Oct 24 14:58 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-818000          | force-systemd-flag-818000 | jenkins | v1.34.0 | 11 Oct 24 14:58 PDT | 11 Oct 24 14:58 PDT |
	| start   | -p cert-expiration-534000             | cert-expiration-534000    | jenkins | v1.34.0 | 11 Oct 24 14:58 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-785000 ssh               | docker-flags-785000       | jenkins | v1.34.0 | 11 Oct 24 14:58 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-785000 ssh               | docker-flags-785000       | jenkins | v1.34.0 | 11 Oct 24 14:58 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-785000                | docker-flags-785000       | jenkins | v1.34.0 | 11 Oct 24 14:58 PDT | 11 Oct 24 14:58 PDT |
	| start   | -p cert-options-754000                | cert-options-754000       | jenkins | v1.34.0 | 11 Oct 24 14:58 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-754000 ssh               | cert-options-754000       | jenkins | v1.34.0 | 11 Oct 24 14:58 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-754000 -- sudo        | cert-options-754000       | jenkins | v1.34.0 | 11 Oct 24 14:58 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-754000                | cert-options-754000       | jenkins | v1.34.0 | 11 Oct 24 14:58 PDT | 11 Oct 24 14:58 PDT |
	| start   | -p running-upgrade-130000             | minikube                  | jenkins | v1.26.0 | 11 Oct 24 14:58 PDT | 11 Oct 24 14:59 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-130000             | running-upgrade-130000    | jenkins | v1.34.0 | 11 Oct 24 14:59 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-534000             | cert-expiration-534000    | jenkins | v1.34.0 | 11 Oct 24 15:01 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-534000             | cert-expiration-534000    | jenkins | v1.34.0 | 11 Oct 24 15:01 PDT | 11 Oct 24 15:01 PDT |
	| start   | -p kubernetes-upgrade-463000          | kubernetes-upgrade-463000 | jenkins | v1.34.0 | 11 Oct 24 15:01 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-463000          | kubernetes-upgrade-463000 | jenkins | v1.34.0 | 11 Oct 24 15:01 PDT | 11 Oct 24 15:01 PDT |
	| start   | -p kubernetes-upgrade-463000          | kubernetes-upgrade-463000 | jenkins | v1.34.0 | 11 Oct 24 15:01 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-463000          | kubernetes-upgrade-463000 | jenkins | v1.34.0 | 11 Oct 24 15:02 PDT | 11 Oct 24 15:02 PDT |
	| start   | -p stopped-upgrade-583000             | minikube                  | jenkins | v1.26.0 | 11 Oct 24 15:02 PDT | 11 Oct 24 15:02 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-583000 stop           | minikube                  | jenkins | v1.26.0 | 11 Oct 24 15:02 PDT | 11 Oct 24 15:02 PDT |
	| start   | -p stopped-upgrade-583000             | stopped-upgrade-583000    | jenkins | v1.34.0 | 11 Oct 24 15:02 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/11 15:02:55
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 15:02:55.536075    5145 out.go:345] Setting OutFile to fd 1 ...
	I1011 15:02:55.536229    5145 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:02:55.536233    5145 out.go:358] Setting ErrFile to fd 2...
	I1011 15:02:55.536236    5145 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:02:55.536360    5145 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 15:02:55.537412    5145 out.go:352] Setting JSON to false
	I1011 15:02:55.556703    5145 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5545,"bootTime":1728678630,"procs":503,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 15:02:55.556792    5145 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 15:02:55.561200    5145 out.go:177] * [stopped-upgrade-583000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 15:02:55.569095    5145 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 15:02:55.569144    5145 notify.go:220] Checking for updates...
	I1011 15:02:55.576998    5145 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 15:02:55.580031    5145 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 15:02:55.583966    5145 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 15:02:55.587042    5145 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 15:02:55.590042    5145 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 15:02:55.593282    5145 config.go:182] Loaded profile config "stopped-upgrade-583000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1011 15:02:55.597033    5145 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1011 15:02:55.598289    5145 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 15:02:55.602042    5145 out.go:177] * Using the qemu2 driver based on existing profile
	I1011 15:02:55.608873    5145 start.go:297] selected driver: qemu2
	I1011 15:02:55.608880    5145 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-583000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:57470 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-583000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1011 15:02:55.608955    5145 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 15:02:55.611831    5145 cni.go:84] Creating CNI manager for ""
	I1011 15:02:55.611872    5145 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1011 15:02:55.611894    5145 start.go:340] cluster config:
	{Name:stopped-upgrade-583000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:57470 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-583000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1011 15:02:55.611951    5145 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:02:55.620012    5145 out.go:177] * Starting "stopped-upgrade-583000" primary control-plane node in "stopped-upgrade-583000" cluster
	I1011 15:02:55.624004    5145 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1011 15:02:55.624017    5145 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1011 15:02:55.624023    5145 cache.go:56] Caching tarball of preloaded images
	I1011 15:02:55.624080    5145 preload.go:172] Found /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1011 15:02:55.624086    5145 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1011 15:02:55.624139    5145 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/config.json ...
	I1011 15:02:55.624560    5145 start.go:360] acquireMachinesLock for stopped-upgrade-583000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:02:55.624590    5145 start.go:364] duration metric: took 24.917µs to acquireMachinesLock for "stopped-upgrade-583000"
	I1011 15:02:55.624600    5145 start.go:96] Skipping create...Using existing machine configuration
	I1011 15:02:55.624605    5145 fix.go:54] fixHost starting: 
	I1011 15:02:55.624716    5145 fix.go:112] recreateIfNeeded on stopped-upgrade-583000: state=Stopped err=<nil>
	W1011 15:02:55.624726    5145 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 15:02:55.632041    5145 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-583000" ...
	I1011 15:02:55.636039    5145 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:02:55.636121    5145 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/stopped-upgrade-583000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/stopped-upgrade-583000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/stopped-upgrade-583000/qemu.pid -nic user,model=virtio,hostfwd=tcp::57437-:22,hostfwd=tcp::57438-:2376,hostname=stopped-upgrade-583000 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/stopped-upgrade-583000/disk.qcow2
	I1011 15:02:55.683568    5145 main.go:141] libmachine: STDOUT: 
	I1011 15:02:55.683702    5145 main.go:141] libmachine: STDERR: 
	I1011 15:02:55.683715    5145 main.go:141] libmachine: Waiting for VM to start (ssh -p 57437 docker@127.0.0.1)...
	I1011 15:02:58.730903    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:03:03.733687    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:03:03.733881    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:03:03.748530    4700 logs.go:282] 2 containers: [24f46358727d dc72a658b8c9]
	I1011 15:03:03.748612    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:03:03.761623    4700 logs.go:282] 2 containers: [9f0e46648c4a ddb08b4b5869]
	I1011 15:03:03.761718    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:03:03.772795    4700 logs.go:282] 1 containers: [6105a62dc060]
	I1011 15:03:03.772873    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:03:03.783695    4700 logs.go:282] 2 containers: [92f60d23dbb0 3e8ced358756]
	I1011 15:03:03.783783    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:03:03.794542    4700 logs.go:282] 1 containers: [3da92cc90a0f]
	I1011 15:03:03.794621    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:03:03.805656    4700 logs.go:282] 2 containers: [ab10164156ed 8eff891e4c56]
	I1011 15:03:03.805742    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:03:03.816794    4700 logs.go:282] 0 containers: []
	W1011 15:03:03.816807    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:03:03.816875    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:03:03.827102    4700 logs.go:282] 0 containers: []
	W1011 15:03:03.827114    4700 logs.go:284] No container was found matching "storage-provisioner"
	I1011 15:03:03.827124    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:03:03.827133    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:03:03.865334    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:03:03.865436    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:03:03.865919    4700 logs.go:123] Gathering logs for kube-apiserver [dc72a658b8c9] ...
	I1011 15:03:03.865924    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc72a658b8c9"
	I1011 15:03:03.882058    4700 logs.go:123] Gathering logs for kube-controller-manager [8eff891e4c56] ...
	I1011 15:03:03.882068    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff891e4c56"
	I1011 15:03:03.893282    4700 logs.go:123] Gathering logs for kube-apiserver [24f46358727d] ...
	I1011 15:03:03.893295    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24f46358727d"
	I1011 15:03:03.908545    4700 logs.go:123] Gathering logs for etcd [ddb08b4b5869] ...
	I1011 15:03:03.908556    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddb08b4b5869"
	I1011 15:03:03.922291    4700 logs.go:123] Gathering logs for coredns [6105a62dc060] ...
	I1011 15:03:03.922301    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6105a62dc060"
	I1011 15:03:03.933785    4700 logs.go:123] Gathering logs for kube-proxy [3da92cc90a0f] ...
	I1011 15:03:03.933796    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3da92cc90a0f"
	I1011 15:03:03.945799    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:03:03.945811    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:03:03.957858    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:03:03.957870    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:03:04.032523    4700 logs.go:123] Gathering logs for etcd [9f0e46648c4a] ...
	I1011 15:03:04.032534    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0e46648c4a"
	I1011 15:03:04.046727    4700 logs.go:123] Gathering logs for kube-controller-manager [ab10164156ed] ...
	I1011 15:03:04.046740    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab10164156ed"
	I1011 15:03:04.068864    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:03:04.068879    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:03:04.074060    4700 logs.go:123] Gathering logs for kube-scheduler [92f60d23dbb0] ...
	I1011 15:03:04.074067    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f60d23dbb0"
	I1011 15:03:04.086345    4700 logs.go:123] Gathering logs for kube-scheduler [3e8ced358756] ...
	I1011 15:03:04.086356    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8ced358756"
	I1011 15:03:04.098999    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:03:04.099011    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:03:04.124397    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:03:04.124408    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:03:04.124446    4700 out.go:270] X Problems detected in kubelet:
	W1011 15:03:04.124452    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:03:04.124455    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:03:04.124459    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:03:04.124461    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:03:15.445838    5145 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/config.json ...
	I1011 15:03:15.446159    5145 machine.go:93] provisionDockerMachine start ...
	I1011 15:03:15.446248    5145 main.go:141] libmachine: Using SSH client type: native
	I1011 15:03:15.446429    5145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100526480] 0x100528cc0 <nil>  [] 0s} localhost 57437 <nil> <nil>}
	I1011 15:03:15.446435    5145 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 15:03:15.514280    5145 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1011 15:03:15.514296    5145 buildroot.go:166] provisioning hostname "stopped-upgrade-583000"
	I1011 15:03:15.514356    5145 main.go:141] libmachine: Using SSH client type: native
	I1011 15:03:15.514468    5145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100526480] 0x100528cc0 <nil>  [] 0s} localhost 57437 <nil> <nil>}
	I1011 15:03:15.514475    5145 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-583000 && echo "stopped-upgrade-583000" | sudo tee /etc/hostname
	I1011 15:03:14.127968    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:03:15.582019    5145 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-583000
	
	I1011 15:03:15.582077    5145 main.go:141] libmachine: Using SSH client type: native
	I1011 15:03:15.582188    5145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100526480] 0x100528cc0 <nil>  [] 0s} localhost 57437 <nil> <nil>}
	I1011 15:03:15.582196    5145 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-583000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-583000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-583000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 15:03:15.649038    5145 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 15:03:15.649051    5145 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19749-1186/.minikube CaCertPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19749-1186/.minikube}
	I1011 15:03:15.649058    5145 buildroot.go:174] setting up certificates
	I1011 15:03:15.649063    5145 provision.go:84] configureAuth start
	I1011 15:03:15.649071    5145 provision.go:143] copyHostCerts
	I1011 15:03:15.649144    5145 exec_runner.go:144] found /Users/jenkins/minikube-integration/19749-1186/.minikube/ca.pem, removing ...
	I1011 15:03:15.649151    5145 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19749-1186/.minikube/ca.pem
	I1011 15:03:15.649362    5145 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19749-1186/.minikube/ca.pem (1078 bytes)
	I1011 15:03:15.649567    5145 exec_runner.go:144] found /Users/jenkins/minikube-integration/19749-1186/.minikube/cert.pem, removing ...
	I1011 15:03:15.649572    5145 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19749-1186/.minikube/cert.pem
	I1011 15:03:15.649615    5145 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19749-1186/.minikube/cert.pem (1123 bytes)
	I1011 15:03:15.649718    5145 exec_runner.go:144] found /Users/jenkins/minikube-integration/19749-1186/.minikube/key.pem, removing ...
	I1011 15:03:15.649721    5145 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19749-1186/.minikube/key.pem
	I1011 15:03:15.649760    5145 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19749-1186/.minikube/key.pem (1675 bytes)
	I1011 15:03:15.649848    5145 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-583000 san=[127.0.0.1 localhost minikube stopped-upgrade-583000]
	I1011 15:03:15.769863    5145 provision.go:177] copyRemoteCerts
	I1011 15:03:15.769911    5145 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 15:03:15.769919    5145 sshutil.go:53] new ssh client: &{IP:localhost Port:57437 SSHKeyPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/stopped-upgrade-583000/id_rsa Username:docker}
	I1011 15:03:15.804141    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1011 15:03:15.810828    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1011 15:03:15.817437    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1011 15:03:15.824607    5145 provision.go:87] duration metric: took 175.535792ms to configureAuth
	I1011 15:03:15.824616    5145 buildroot.go:189] setting minikube options for container-runtime
	I1011 15:03:15.824737    5145 config.go:182] Loaded profile config "stopped-upgrade-583000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1011 15:03:15.824787    5145 main.go:141] libmachine: Using SSH client type: native
	I1011 15:03:15.824877    5145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100526480] 0x100528cc0 <nil>  [] 0s} localhost 57437 <nil> <nil>}
	I1011 15:03:15.824881    5145 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1011 15:03:15.889963    5145 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1011 15:03:15.889972    5145 buildroot.go:70] root file system type: tmpfs
	I1011 15:03:15.890025    5145 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1011 15:03:15.890081    5145 main.go:141] libmachine: Using SSH client type: native
	I1011 15:03:15.890180    5145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100526480] 0x100528cc0 <nil>  [] 0s} localhost 57437 <nil> <nil>}
	I1011 15:03:15.890213    5145 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1011 15:03:15.958753    5145 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1011 15:03:15.958813    5145 main.go:141] libmachine: Using SSH client type: native
	I1011 15:03:15.958932    5145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100526480] 0x100528cc0 <nil>  [] 0s} localhost 57437 <nil> <nil>}
	I1011 15:03:15.958942    5145 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1011 15:03:16.334926    5145 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1011 15:03:16.334940    5145 machine.go:96] duration metric: took 888.78925ms to provisionDockerMachine
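	(Note: the provisioning step above ends with an update-if-changed swap of the generated unit file: the new unit is written to docker.service.new, diffed against the installed unit, and only moved into place, with daemon-reload/enable/restart, when they differ or the old unit is missing. A minimal standalone sketch of that pattern, assuming a prepared unit at /tmp/docker.service.new, is roughly:
	    # Install /tmp/docker.service.new only if it differs from the live unit (diff also
	    # exits non-zero when the installed unit does not exist yet, covering first install).
	    if ! sudo diff -u /lib/systemd/system/docker.service /tmp/docker.service.new; then
	      sudo mv /tmp/docker.service.new /lib/systemd/system/docker.service
	      sudo systemctl daemon-reload
	      sudo systemctl enable docker
	      sudo systemctl restart docker
	    fi
	)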
	I1011 15:03:16.334946    5145 start.go:293] postStartSetup for "stopped-upgrade-583000" (driver="qemu2")
	I1011 15:03:16.334954    5145 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 15:03:16.335042    5145 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 15:03:16.335053    5145 sshutil.go:53] new ssh client: &{IP:localhost Port:57437 SSHKeyPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/stopped-upgrade-583000/id_rsa Username:docker}
	I1011 15:03:16.369047    5145 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 15:03:16.370303    5145 info.go:137] Remote host: Buildroot 2021.02.12
	I1011 15:03:16.370310    5145 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19749-1186/.minikube/addons for local assets ...
	I1011 15:03:16.370382    5145 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19749-1186/.minikube/files for local assets ...
	I1011 15:03:16.370477    5145 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19749-1186/.minikube/files/etc/ssl/certs/17072.pem -> 17072.pem in /etc/ssl/certs
	I1011 15:03:16.370584    5145 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 15:03:16.373376    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/files/etc/ssl/certs/17072.pem --> /etc/ssl/certs/17072.pem (1708 bytes)
	I1011 15:03:16.380397    5145 start.go:296] duration metric: took 45.446209ms for postStartSetup
	I1011 15:03:16.380411    5145 fix.go:56] duration metric: took 20.756134875s for fixHost
	I1011 15:03:16.380455    5145 main.go:141] libmachine: Using SSH client type: native
	I1011 15:03:16.380573    5145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100526480] 0x100528cc0 <nil>  [] 0s} localhost 57437 <nil> <nil>}
	I1011 15:03:16.380577    5145 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 15:03:16.446187    5145 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728684196.936639546
	
	I1011 15:03:16.446198    5145 fix.go:216] guest clock: 1728684196.936639546
	I1011 15:03:16.446202    5145 fix.go:229] Guest: 2024-10-11 15:03:16.936639546 -0700 PDT Remote: 2024-10-11 15:03:16.380413 -0700 PDT m=+20.866889834 (delta=556.226546ms)
	I1011 15:03:16.446216    5145 fix.go:200] guest clock delta is within tolerance: 556.226546ms
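	(Note: the fix step samples the guest clock over SSH with `date +%s.%N` and compares it to the host clock, accepting the ~556ms delta as within tolerance. A rough sketch of the same comparison, assuming GNU date on both ends, a reachable $GUEST SSH target, and a hypothetical 1-second tolerance:
	    # Compare guest and host epoch time; report skew above 1s.
	    guest=$(ssh "$GUEST" 'date +%s.%N')
	    host=$(date +%s.%N)
	    awk -v g="$guest" -v h="$host" 'BEGIN {
	      d = g - h; if (d < 0) d = -d
	      printf "guest clock delta: %.3fs\n", d
	      exit (d > 1.0) ? 1 : 0
	    }'
	)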
	I1011 15:03:16.446222    5145 start.go:83] releasing machines lock for "stopped-upgrade-583000", held for 20.821955292s
	I1011 15:03:16.446292    5145 ssh_runner.go:195] Run: cat /version.json
	I1011 15:03:16.446297    5145 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 15:03:16.446300    5145 sshutil.go:53] new ssh client: &{IP:localhost Port:57437 SSHKeyPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/stopped-upgrade-583000/id_rsa Username:docker}
	I1011 15:03:16.446317    5145 sshutil.go:53] new ssh client: &{IP:localhost Port:57437 SSHKeyPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/stopped-upgrade-583000/id_rsa Username:docker}
	W1011 15:03:16.446810    5145 sshutil.go:64] dial failure (will retry): dial tcp [::1]:57437: connect: connection refused
	I1011 15:03:16.446828    5145 retry.go:31] will retry after 289.694169ms: dial tcp [::1]:57437: connect: connection refused
	W1011 15:03:16.477695    5145 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1011 15:03:16.477743    5145 ssh_runner.go:195] Run: systemctl --version
	I1011 15:03:16.479652    5145 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 15:03:16.481276    5145 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 15:03:16.481313    5145 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1011 15:03:16.484527    5145 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1011 15:03:16.489373    5145 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 15:03:16.489381    5145 start.go:495] detecting cgroup driver to use...
	I1011 15:03:16.489462    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 15:03:16.496659    5145 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1011 15:03:16.500439    5145 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1011 15:03:16.503587    5145 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1011 15:03:16.503618    5145 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1011 15:03:16.509032    5145 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1011 15:03:16.512606    5145 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1011 15:03:16.515667    5145 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1011 15:03:16.518776    5145 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 15:03:16.521728    5145 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1011 15:03:16.524700    5145 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1011 15:03:16.527833    5145 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1011 15:03:16.530783    5145 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 15:03:16.533895    5145 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 15:03:16.538596    5145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 15:03:16.611704    5145 ssh_runner.go:195] Run: sudo systemctl restart containerd
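	(Note: the sed sequence above rewrites /etc/containerd/config.toml so containerd uses the "cgroupfs" cgroup driver, then reloads systemd and restarts the service. Condensed to just the cgroup-driver switch, and assuming the stock config.toml layout, the pattern is roughly:
	    # Force the runc SystemdCgroup option off, then restart containerd.
	    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	    sudo systemctl daemon-reload
	    sudo systemctl restart containerd
	)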
	I1011 15:03:16.617572    5145 start.go:495] detecting cgroup driver to use...
	I1011 15:03:16.617655    5145 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1011 15:03:16.624052    5145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 15:03:16.629294    5145 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 15:03:16.635310    5145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 15:03:16.639489    5145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1011 15:03:16.643876    5145 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1011 15:03:16.711007    5145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1011 15:03:16.716329    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 15:03:16.721974    5145 ssh_runner.go:195] Run: which cri-dockerd
	I1011 15:03:16.723260    5145 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1011 15:03:16.725814    5145 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1011 15:03:16.730706    5145 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1011 15:03:16.799892    5145 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1011 15:03:16.934140    5145 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1011 15:03:16.934198    5145 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1011 15:03:16.939259    5145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 15:03:17.028077    5145 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1011 15:03:18.183944    5145 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.155864583s)
	I1011 15:03:18.184029    5145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1011 15:03:18.188616    5145 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1011 15:03:18.194810    5145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1011 15:03:18.199543    5145 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1011 15:03:18.268262    5145 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1011 15:03:18.345005    5145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 15:03:18.421358    5145 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1011 15:03:18.427913    5145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1011 15:03:18.432647    5145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 15:03:18.507687    5145 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1011 15:03:18.546182    5145 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1011 15:03:18.546274    5145 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1011 15:03:18.548339    5145 start.go:563] Will wait 60s for crictl version
	I1011 15:03:18.548413    5145 ssh_runner.go:195] Run: which crictl
	I1011 15:03:18.549851    5145 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 15:03:18.566060    5145 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1011 15:03:18.566136    5145 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1011 15:03:18.583469    5145 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1011 15:03:18.607825    5145 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1011 15:03:18.607974    5145 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1011 15:03:18.609306    5145 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
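	(Note: the command above adds or refreshes the host.minikube.internal entry idempotently: it filters out any existing line for the name, appends the new mapping, and copies the result back over /etc/hosts. The same pattern for an arbitrary, hypothetical name/IP pair looks roughly like:
	    # Idempotently pin NAME to IP in /etc/hosts (tab-separated, matching the format written above).
	    NAME=host.example.internal
	    IP=10.0.2.2
	    { grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
	    sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$
	)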
	I1011 15:03:18.613185    5145 kubeadm.go:883] updating cluster {Name:stopped-upgrade-583000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:57470 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName
:stopped-upgrade-583000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1011 15:03:18.613238    5145 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1011 15:03:18.613285    5145 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1011 15:03:18.623956    5145 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1011 15:03:18.623965    5145 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1011 15:03:18.624031    5145 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1011 15:03:18.627256    5145 ssh_runner.go:195] Run: which lz4
	I1011 15:03:18.628480    5145 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1011 15:03:18.629678    5145 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1011 15:03:18.629687    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1011 15:03:19.591818    5145 docker.go:653] duration metric: took 963.398167ms to copy over tarball
	I1011 15:03:19.591907    5145 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1011 15:03:19.129208    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:03:19.129306    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:03:19.141458    4700 logs.go:282] 2 containers: [24f46358727d dc72a658b8c9]
	I1011 15:03:19.141542    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:03:19.153397    4700 logs.go:282] 2 containers: [9f0e46648c4a ddb08b4b5869]
	I1011 15:03:19.153481    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:03:19.165555    4700 logs.go:282] 1 containers: [6105a62dc060]
	I1011 15:03:19.165643    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:03:19.177704    4700 logs.go:282] 2 containers: [92f60d23dbb0 3e8ced358756]
	I1011 15:03:19.177801    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:03:19.190042    4700 logs.go:282] 1 containers: [3da92cc90a0f]
	I1011 15:03:19.190129    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:03:19.202070    4700 logs.go:282] 2 containers: [ab10164156ed 8eff891e4c56]
	I1011 15:03:19.202150    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:03:19.213165    4700 logs.go:282] 0 containers: []
	W1011 15:03:19.213176    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:03:19.213249    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:03:19.224447    4700 logs.go:282] 0 containers: []
	W1011 15:03:19.224458    4700 logs.go:284] No container was found matching "storage-provisioner"
	I1011 15:03:19.224468    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:03:19.224475    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:03:19.229926    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:03:19.229937    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:03:19.275090    4700 logs.go:123] Gathering logs for kube-apiserver [24f46358727d] ...
	I1011 15:03:19.275101    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24f46358727d"
	I1011 15:03:19.291209    4700 logs.go:123] Gathering logs for kube-controller-manager [ab10164156ed] ...
	I1011 15:03:19.291222    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab10164156ed"
	I1011 15:03:19.310681    4700 logs.go:123] Gathering logs for kube-apiserver [dc72a658b8c9] ...
	I1011 15:03:19.310692    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc72a658b8c9"
	I1011 15:03:19.326808    4700 logs.go:123] Gathering logs for kube-scheduler [3e8ced358756] ...
	I1011 15:03:19.326820    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8ced358756"
	I1011 15:03:19.339995    4700 logs.go:123] Gathering logs for kube-controller-manager [8eff891e4c56] ...
	I1011 15:03:19.340007    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff891e4c56"
	I1011 15:03:19.352756    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:03:19.352777    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:03:19.391997    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:03:19.392096    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:03:19.392585    4700 logs.go:123] Gathering logs for etcd [9f0e46648c4a] ...
	I1011 15:03:19.392591    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0e46648c4a"
	I1011 15:03:19.408227    4700 logs.go:123] Gathering logs for coredns [6105a62dc060] ...
	I1011 15:03:19.408239    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6105a62dc060"
	I1011 15:03:19.421527    4700 logs.go:123] Gathering logs for kube-proxy [3da92cc90a0f] ...
	I1011 15:03:19.421538    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3da92cc90a0f"
	I1011 15:03:19.434780    4700 logs.go:123] Gathering logs for etcd [ddb08b4b5869] ...
	I1011 15:03:19.434792    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddb08b4b5869"
	I1011 15:03:19.449860    4700 logs.go:123] Gathering logs for kube-scheduler [92f60d23dbb0] ...
	I1011 15:03:19.449872    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f60d23dbb0"
	I1011 15:03:19.462825    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:03:19.462837    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:03:19.489367    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:03:19.489380    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:03:19.502446    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:03:19.502458    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:03:19.502487    4700 out.go:270] X Problems detected in kubelet:
	W1011 15:03:19.502492    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:03:19.502496    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:03:19.502500    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:03:19.502503    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:03:20.791859    5145 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.199946208s)
	I1011 15:03:20.791873    5145 ssh_runner.go:146] rm: /preloaded.tar.lz4
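	(Note: the preload step copies an lz4-compressed image/layer tarball to the guest, unpacks it directly under /var, and deletes it. A standalone sketch of that unpack, assuming the tarball sits at /tmp/preloaded.tar.lz4 and lz4 is installed in the guest:
	    # Unpack the preloaded docker image store into /var, preserving file capabilities.
	    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/preloaded.tar.lz4
	    sudo rm /tmp/preloaded.tar.lz4
	)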
	I1011 15:03:20.807349    5145 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1011 15:03:20.810139    5145 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1011 15:03:20.815124    5145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 15:03:20.897696    5145 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1011 15:03:22.428426    5145 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.530737667s)
	I1011 15:03:22.428539    5145 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1011 15:03:22.444073    5145 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1011 15:03:22.444082    5145 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1011 15:03:22.444090    5145 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1011 15:03:22.450522    5145 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 15:03:22.451674    5145 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1011 15:03:22.452773    5145 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 15:03:22.453069    5145 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1011 15:03:22.454685    5145 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1011 15:03:22.454828    5145 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1011 15:03:22.456374    5145 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1011 15:03:22.456809    5145 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1011 15:03:22.457351    5145 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1011 15:03:22.457454    5145 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1011 15:03:22.458785    5145 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1011 15:03:22.459172    5145 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1011 15:03:22.459671    5145 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1011 15:03:22.459916    5145 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1011 15:03:22.461378    5145 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1011 15:03:22.461457    5145 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1011 15:03:23.039730    5145 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1011 15:03:23.042473    5145 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1011 15:03:23.051972    5145 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1011 15:03:23.052003    5145 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1011 15:03:23.052069    5145 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1011 15:03:23.059651    5145 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1011 15:03:23.059684    5145 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1011 15:03:23.059725    5145 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1011 15:03:23.065031    5145 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1011 15:03:23.070373    5145 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1011 15:03:23.071103    5145 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1011 15:03:23.081148    5145 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1011 15:03:23.081173    5145 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1011 15:03:23.081226    5145 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1011 15:03:23.093179    5145 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1011 15:03:23.118331    5145 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1011 15:03:23.128594    5145 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1011 15:03:23.128623    5145 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1011 15:03:23.128683    5145 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1011 15:03:23.138344    5145 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1011 15:03:23.138487    5145 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1011 15:03:23.139980    5145 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1011 15:03:23.139992    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1011 15:03:23.148827    5145 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1011 15:03:23.148835    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1011 15:03:23.154918    5145 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1011 15:03:23.183881    5145 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1011 15:03:23.183924    5145 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1011 15:03:23.183942    5145 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1011 15:03:23.184003    5145 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1011 15:03:23.195023    5145 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	W1011 15:03:23.228583    5145 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1011 15:03:23.228751    5145 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1011 15:03:23.238978    5145 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1011 15:03:23.239000    5145 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1011 15:03:23.239069    5145 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1011 15:03:23.249571    5145 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1011 15:03:23.249727    5145 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1011 15:03:23.251222    5145 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1011 15:03:23.251236    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1011 15:03:23.292634    5145 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1011 15:03:23.292647    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1011 15:03:23.332335    5145 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1011 15:03:23.361751    5145 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	W1011 15:03:23.370763    5145 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1011 15:03:23.370901    5145 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 15:03:23.372373    5145 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1011 15:03:23.372392    5145 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1011 15:03:23.372435    5145 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1011 15:03:23.384289    5145 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1011 15:03:23.384315    5145 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 15:03:23.384381    5145 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 15:03:23.391685    5145 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1011 15:03:23.391831    5145 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1011 15:03:23.401606    5145 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I1011 15:03:23.401636    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I1011 15:03:23.401672    5145 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1011 15:03:23.401795    5145 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1011 15:03:23.403743    5145 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1011 15:03:23.403764    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1011 15:03:23.471190    5145 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1011 15:03:23.471203    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1011 15:03:23.848898    5145 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1011 15:03:23.848927    5145 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1011 15:03:23.848935    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I1011 15:03:23.987459    5145 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1011 15:03:23.987502    5145 cache_images.go:92] duration metric: took 1.543429291s to LoadCachedImages
	W1011 15:03:23.987546    5145 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
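	(Note: in the LoadCachedImages phase above, each image missing from the runtime is transferred from the local cache as a tarball and piped into `docker load`. A minimal sketch of one such load plus a follow-up check, assuming the cached tarball has already been copied to /var/lib/minikube/images/pause_3.7:
	    # Load one cached image tarball into the docker runtime and verify it landed.
	    sudo cat /var/lib/minikube/images/pause_3.7 | docker load
	    docker image inspect --format '{{.Id}}' registry.k8s.io/pause:3.7
	)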
	I1011 15:03:23.987552    5145 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1011 15:03:23.987609    5145 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-583000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-583000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 15:03:23.987688    5145 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1011 15:03:24.001605    5145 cni.go:84] Creating CNI manager for ""
	I1011 15:03:24.001617    5145 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1011 15:03:24.001623    5145 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 15:03:24.001631    5145 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-583000 NodeName:stopped-upgrade-583000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 15:03:24.001706    5145 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-583000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 15:03:24.001776    5145 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1011 15:03:24.005077    5145 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 15:03:24.005117    5145 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 15:03:24.008377    5145 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1011 15:03:24.013495    5145 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 15:03:24.018697    5145 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1011 15:03:24.024011    5145 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1011 15:03:24.025242    5145 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 15:03:24.028960    5145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 15:03:24.109891    5145 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 15:03:24.115529    5145 certs.go:68] Setting up /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000 for IP: 10.0.2.15
	I1011 15:03:24.115541    5145 certs.go:194] generating shared ca certs ...
	I1011 15:03:24.115550    5145 certs.go:226] acquiring lock for ca certs: {Name:mk35edffff951ee63400693cabf88751b6257cd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 15:03:24.115743    5145 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19749-1186/.minikube/ca.key
	I1011 15:03:24.116475    5145 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19749-1186/.minikube/proxy-client-ca.key
	I1011 15:03:24.116483    5145 certs.go:256] generating profile certs ...
	I1011 15:03:24.116713    5145 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/client.key
	I1011 15:03:24.116730    5145 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/apiserver.key.dabe18a6
	I1011 15:03:24.116743    5145 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/apiserver.crt.dabe18a6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1011 15:03:24.188646    5145 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/apiserver.crt.dabe18a6 ...
	I1011 15:03:24.188658    5145 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/apiserver.crt.dabe18a6: {Name:mke2e906f6aa60aa296960fd8012aab304f8de9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 15:03:24.189354    5145 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/apiserver.key.dabe18a6 ...
	I1011 15:03:24.189361    5145 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/apiserver.key.dabe18a6: {Name:mk4e6f11d67b071a3f770925a637f8f17d79183f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 15:03:24.189538    5145 certs.go:381] copying /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/apiserver.crt.dabe18a6 -> /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/apiserver.crt
	I1011 15:03:24.189666    5145 certs.go:385] copying /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/apiserver.key.dabe18a6 -> /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/apiserver.key
	I1011 15:03:24.189904    5145 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/proxy-client.key
	I1011 15:03:24.190056    5145 certs.go:484] found cert: /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/1707.pem (1338 bytes)
	W1011 15:03:24.190220    5145 certs.go:480] ignoring /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/1707_empty.pem, impossibly tiny 0 bytes
	I1011 15:03:24.190229    5145 certs.go:484] found cert: /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca-key.pem (1679 bytes)
	I1011 15:03:24.190249    5145 certs.go:484] found cert: /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem (1078 bytes)
	I1011 15:03:24.190272    5145 certs.go:484] found cert: /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem (1123 bytes)
	I1011 15:03:24.190293    5145 certs.go:484] found cert: /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/key.pem (1675 bytes)
	I1011 15:03:24.190345    5145 certs.go:484] found cert: /Users/jenkins/minikube-integration/19749-1186/.minikube/files/etc/ssl/certs/17072.pem (1708 bytes)
	I1011 15:03:24.190706    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 15:03:24.197738    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 15:03:24.204873    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 15:03:24.212206    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 15:03:24.220284    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1011 15:03:24.226997    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1011 15:03:24.233310    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 15:03:24.240550    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1011 15:03:24.247683    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 15:03:24.253946    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/1707.pem --> /usr/share/ca-certificates/1707.pem (1338 bytes)
	I1011 15:03:24.261005    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/files/etc/ssl/certs/17072.pem --> /usr/share/ca-certificates/17072.pem (1708 bytes)
	I1011 15:03:24.268195    5145 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 15:03:24.273451    5145 ssh_runner.go:195] Run: openssl version
	I1011 15:03:24.275400    5145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 15:03:24.278240    5145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 15:03:24.279593    5145 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:58 /usr/share/ca-certificates/minikubeCA.pem
	I1011 15:03:24.279616    5145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 15:03:24.281198    5145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 15:03:24.284560    5145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1707.pem && ln -fs /usr/share/ca-certificates/1707.pem /etc/ssl/certs/1707.pem"
	I1011 15:03:24.287751    5145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1707.pem
	I1011 15:03:24.289274    5145 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:05 /usr/share/ca-certificates/1707.pem
	I1011 15:03:24.289304    5145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1707.pem
	I1011 15:03:24.291276    5145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1707.pem /etc/ssl/certs/51391683.0"
	I1011 15:03:24.294236    5145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17072.pem && ln -fs /usr/share/ca-certificates/17072.pem /etc/ssl/certs/17072.pem"
	I1011 15:03:24.297429    5145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17072.pem
	I1011 15:03:24.298868    5145 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:05 /usr/share/ca-certificates/17072.pem
	I1011 15:03:24.298894    5145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17072.pem
	I1011 15:03:24.300473    5145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17072.pem /etc/ssl/certs/3ec20f2e.0"
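	(Note: the certificate installation loop above uses OpenSSL subject-hash naming so each CA is discoverable under /etc/ssl/certs: copy the PEM, compute its hash, and symlink `<hash>.0` to it. A compact sketch for one certificate, assuming it already sits at /usr/share/ca-certificates/minikubeCA.pem:
	    # Link the CA into /etc/ssl/certs under its OpenSSL subject-hash name.
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	)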
	I1011 15:03:24.303420    5145 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 15:03:24.304700    5145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1011 15:03:24.306702    5145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1011 15:03:24.308517    5145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1011 15:03:24.310515    5145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1011 15:03:24.312235    5145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1011 15:03:24.314646    5145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
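	(Note: the checks above probe each control-plane certificate with `openssl x509 -checkend 86400`, which exits non-zero when the certificate expires within the next 24 hours. A sketch wrapping that check for a single cert, using one of the paths from the log:
	    # Fail if the certificate expires within the next 24 hours.
	    if ! openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	      echo "apiserver-kubelet-client.crt expires within 24h; regeneration needed" >&2
	    fi
	)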
	I1011 15:03:24.316402    5145 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-583000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:57470 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-583000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1011 15:03:24.316476    5145 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1011 15:03:24.326818    5145 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 15:03:24.329957    5145 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1011 15:03:24.329966    5145 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1011 15:03:24.329992    5145 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1011 15:03:24.332907    5145 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1011 15:03:24.333218    5145 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-583000" does not appear in /Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 15:03:24.333322    5145 kubeconfig.go:62] /Users/jenkins/minikube-integration/19749-1186/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-583000" cluster setting kubeconfig missing "stopped-upgrade-583000" context setting]
	I1011 15:03:24.333511    5145 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/kubeconfig: {Name:mkc848521291f94f61a80272f8eb43a8779805e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 15:03:24.333951    5145 kapi.go:59] client config for stopped-upgrade-583000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/client.key", CAFile:"/Users/jenkins/minikube-integration/19749-1186/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101f7ee40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1011 15:03:24.334428    5145 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1011 15:03:24.337392    5145 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-583000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
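The drift shown above (the criSocket gaining a unix:// scheme, plus cgroupDriver, hairpinMode, and runtimeRequestTimeout changing) is detected by diffing the deployed kubeadm.yaml against the freshly rendered kubeadm.yaml.new; any difference triggers a reconfigure. A rough sketch of that decision, assuming a plain local exec rather than minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

// configChanged runs `diff -u old new` and treats any non-zero exit
// (differences found, or a missing file) as "reconfigure needed".
func configChanged(oldPath, newPath string) (bool, string) {
	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
	if err != nil {
		return true, string(out) // diff exits 1 when the files differ
	}
	return false, ""
}

func main() {
	changed, diff := configChanged("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if changed {
		fmt.Println("detected kubeadm config drift, will reconfigure:")
		fmt.Println(diff)
	}
}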
	I1011 15:03:24.337398    5145 kubeadm.go:1160] stopping kube-system containers ...
	I1011 15:03:24.337445    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1011 15:03:24.348532    5145 docker.go:483] Stopping containers: [3147d798970d b001d59290a4 e5ff18c232f1 26a6947a1458 cd8a136a40f5 e7805c8a9be5 f6da21be1d5b d3912344e421]
	I1011 15:03:24.348605    5145 ssh_runner.go:195] Run: docker stop 3147d798970d b001d59290a4 e5ff18c232f1 26a6947a1458 cd8a136a40f5 e7805c8a9be5 f6da21be1d5b d3912344e421
	I1011 15:03:24.359581    5145 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1011 15:03:24.365419    5145 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 15:03:24.368895    5145 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 15:03:24.368901    5145 kubeadm.go:157] found existing configuration files:
	
	I1011 15:03:24.368930    5145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57470 /etc/kubernetes/admin.conf
	I1011 15:03:24.372030    5145 kubeadm.go:163] "https://control-plane.minikube.internal:57470" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:57470 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 15:03:24.372061    5145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 15:03:24.374648    5145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57470 /etc/kubernetes/kubelet.conf
	I1011 15:03:24.377179    5145 kubeadm.go:163] "https://control-plane.minikube.internal:57470" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:57470 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 15:03:24.377207    5145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 15:03:24.380266    5145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57470 /etc/kubernetes/controller-manager.conf
	I1011 15:03:24.383037    5145 kubeadm.go:163] "https://control-plane.minikube.internal:57470" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:57470 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 15:03:24.383071    5145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 15:03:24.385498    5145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57470 /etc/kubernetes/scheduler.conf
	I1011 15:03:24.388551    5145 kubeadm.go:163] "https://control-plane.minikube.internal:57470" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:57470 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 15:03:24.388578    5145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 15:03:24.391440    5145 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 15:03:24.394127    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 15:03:24.416147    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 15:03:24.754606    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1011 15:03:24.888364    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 15:03:24.909948    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
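Rather than a full `kubeadm init`, the restart path re-runs individual init phases in sequence (certs, kubeconfig, kubelet-start, control-plane, etcd), matching the five commands above. A compact sketch of driving those phases, assuming direct exec of the pinned kubeadm binary:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.24.1/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		// Run each phase against the rendered config, stopping at the first failure.
		args := append(p, "--config", cfg)
		cmd := exec.Command("sudo", append([]string{kubeadm}, args...)...)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s\n", p, err, out)
			return
		}
	}
	fmt.Println("all kubeadm phases completed")
}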
	I1011 15:03:24.939810    5145 api_server.go:52] waiting for apiserver process to appear ...
	I1011 15:03:24.939904    5145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 15:03:25.441682    5145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 15:03:25.940131    5145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 15:03:25.944789    5145 api_server.go:72] duration metric: took 1.004989625s to wait for apiserver process to appear ...
	I1011 15:03:25.944801    5145 api_server.go:88] waiting for apiserver healthz status ...
	I1011 15:03:25.944816    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
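From this point the two minikube runs interleave (5145 for stopped-upgrade-583000, 4700 for running-upgrade-130000): each repeatedly polls https://10.0.2.15:8443/healthz and logs `stopped: ... context deadline exceeded` whenever the request times out, which is why the checks repeat for several minutes below. A minimal sketch of such a poll loop, with an assumed 5-second per-request timeout and TLS verification skipped purely for brevity (the real client trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // roughly the gap between checks seen in the log
		Transport: &http.Transport{
			// Illustration only; minikube's client verifies against the minikube CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // e.g. context deadline exceeded
			time.Sleep(500 * time.Millisecond)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}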
	I1011 15:03:29.506470    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:03:30.946835    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:03:30.946878    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:03:34.508668    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:03:34.508840    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:03:34.524142    4700 logs.go:282] 2 containers: [24f46358727d dc72a658b8c9]
	I1011 15:03:34.524224    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:03:34.535140    4700 logs.go:282] 2 containers: [9f0e46648c4a ddb08b4b5869]
	I1011 15:03:34.535222    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:03:34.545235    4700 logs.go:282] 1 containers: [6105a62dc060]
	I1011 15:03:34.545314    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:03:34.557071    4700 logs.go:282] 2 containers: [92f60d23dbb0 3e8ced358756]
	I1011 15:03:34.557146    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:03:34.567402    4700 logs.go:282] 1 containers: [3da92cc90a0f]
	I1011 15:03:34.567468    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:03:34.577904    4700 logs.go:282] 2 containers: [ab10164156ed 8eff891e4c56]
	I1011 15:03:34.577985    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:03:34.588108    4700 logs.go:282] 0 containers: []
	W1011 15:03:34.588119    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:03:34.588185    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:03:34.597816    4700 logs.go:282] 0 containers: []
	W1011 15:03:34.597828    4700 logs.go:284] No container was found matching "storage-provisioner"
	I1011 15:03:34.597835    4700 logs.go:123] Gathering logs for coredns [6105a62dc060] ...
	I1011 15:03:34.597845    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6105a62dc060"
	I1011 15:03:34.611498    4700 logs.go:123] Gathering logs for kube-proxy [3da92cc90a0f] ...
	I1011 15:03:34.611510    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3da92cc90a0f"
	I1011 15:03:34.623646    4700 logs.go:123] Gathering logs for kube-apiserver [24f46358727d] ...
	I1011 15:03:34.623658    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24f46358727d"
	I1011 15:03:34.642764    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:03:34.642778    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:03:34.647063    4700 logs.go:123] Gathering logs for etcd [9f0e46648c4a] ...
	I1011 15:03:34.647072    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0e46648c4a"
	I1011 15:03:34.661179    4700 logs.go:123] Gathering logs for kube-controller-manager [ab10164156ed] ...
	I1011 15:03:34.661193    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab10164156ed"
	I1011 15:03:34.679703    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:03:34.679713    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:03:34.717024    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:03:34.717122    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:03:34.717607    4700 logs.go:123] Gathering logs for kube-scheduler [3e8ced358756] ...
	I1011 15:03:34.717612    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8ced358756"
	I1011 15:03:34.731917    4700 logs.go:123] Gathering logs for kube-controller-manager [8eff891e4c56] ...
	I1011 15:03:34.731933    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff891e4c56"
	I1011 15:03:34.743962    4700 logs.go:123] Gathering logs for kube-apiserver [dc72a658b8c9] ...
	I1011 15:03:34.743976    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc72a658b8c9"
	I1011 15:03:34.756093    4700 logs.go:123] Gathering logs for etcd [ddb08b4b5869] ...
	I1011 15:03:34.756103    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddb08b4b5869"
	I1011 15:03:34.769605    4700 logs.go:123] Gathering logs for kube-scheduler [92f60d23dbb0] ...
	I1011 15:03:34.769616    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f60d23dbb0"
	I1011 15:03:34.782015    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:03:34.782026    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:03:34.806620    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:03:34.806635    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:03:34.818465    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:03:34.818475    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:03:34.852913    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:03:34.852922    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:03:34.852953    4700 out.go:270] X Problems detected in kubelet:
	W1011 15:03:34.852959    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:03:34.852963    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:03:34.852967    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:03:34.852972    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:03:35.947359    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:03:35.947405    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:03:40.947907    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:03:40.947966    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:03:44.856951    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:03:45.948663    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:03:45.948684    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:03:49.859152    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:03:49.859403    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:03:49.877515    4700 logs.go:282] 2 containers: [24f46358727d dc72a658b8c9]
	I1011 15:03:49.877625    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:03:49.891366    4700 logs.go:282] 2 containers: [9f0e46648c4a ddb08b4b5869]
	I1011 15:03:49.891444    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:03:49.903097    4700 logs.go:282] 1 containers: [6105a62dc060]
	I1011 15:03:49.903179    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:03:49.913552    4700 logs.go:282] 2 containers: [92f60d23dbb0 3e8ced358756]
	I1011 15:03:49.913622    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:03:49.923985    4700 logs.go:282] 1 containers: [3da92cc90a0f]
	I1011 15:03:49.924060    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:03:49.934341    4700 logs.go:282] 2 containers: [ab10164156ed 8eff891e4c56]
	I1011 15:03:49.934425    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:03:49.944382    4700 logs.go:282] 0 containers: []
	W1011 15:03:49.944397    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:03:49.944458    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:03:49.955678    4700 logs.go:282] 0 containers: []
	W1011 15:03:49.955689    4700 logs.go:284] No container was found matching "storage-provisioner"
	I1011 15:03:49.955699    4700 logs.go:123] Gathering logs for kube-controller-manager [8eff891e4c56] ...
	I1011 15:03:49.955704    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8eff891e4c56"
	I1011 15:03:49.966585    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:03:49.966597    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:03:49.979114    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:03:49.979125    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:03:50.017805    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:03:50.017909    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:03:50.018388    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:03:50.018398    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:03:50.058858    4700 logs.go:123] Gathering logs for coredns [6105a62dc060] ...
	I1011 15:03:50.058871    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6105a62dc060"
	I1011 15:03:50.071611    4700 logs.go:123] Gathering logs for kube-scheduler [3e8ced358756] ...
	I1011 15:03:50.071623    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8ced358756"
	I1011 15:03:50.082887    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:03:50.082898    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:03:50.107401    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:03:50.107411    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:03:50.111560    4700 logs.go:123] Gathering logs for kube-apiserver [dc72a658b8c9] ...
	I1011 15:03:50.111566    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc72a658b8c9"
	I1011 15:03:50.125677    4700 logs.go:123] Gathering logs for etcd [9f0e46648c4a] ...
	I1011 15:03:50.125687    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0e46648c4a"
	I1011 15:03:50.139384    4700 logs.go:123] Gathering logs for kube-scheduler [92f60d23dbb0] ...
	I1011 15:03:50.139395    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f60d23dbb0"
	I1011 15:03:50.150704    4700 logs.go:123] Gathering logs for kube-apiserver [24f46358727d] ...
	I1011 15:03:50.150716    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24f46358727d"
	I1011 15:03:50.164366    4700 logs.go:123] Gathering logs for kube-proxy [3da92cc90a0f] ...
	I1011 15:03:50.164378    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3da92cc90a0f"
	I1011 15:03:50.175944    4700 logs.go:123] Gathering logs for etcd [ddb08b4b5869] ...
	I1011 15:03:50.175954    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddb08b4b5869"
	I1011 15:03:50.189169    4700 logs.go:123] Gathering logs for kube-controller-manager [ab10164156ed] ...
	I1011 15:03:50.189179    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab10164156ed"
	I1011 15:03:50.206647    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:03:50.206656    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:03:50.206681    4700 out.go:270] X Problems detected in kubelet:
	W1011 15:03:50.206685    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:03:50.206689    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:03:50.206704    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:03:50.206709    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:03:50.949522    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:03:50.949618    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:03:55.951182    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:03:55.951296    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:00.209503    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:00.953520    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:00.953631    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:05.211810    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:05.211892    4700 kubeadm.go:597] duration metric: took 4m8.288367917s to restartPrimaryControlPlane
	W1011 15:04:05.211941    4700 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1011 15:04:05.211975    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1011 15:04:06.113349    4700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 15:04:06.118735    4700 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 15:04:06.121678    4700 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 15:04:06.124509    4700 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 15:04:06.124515    4700 kubeadm.go:157] found existing configuration files:
	
	I1011 15:04:06.124545    4700 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57235 /etc/kubernetes/admin.conf
	I1011 15:04:06.127208    4700 kubeadm.go:163] "https://control-plane.minikube.internal:57235" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:57235 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 15:04:06.127243    4700 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 15:04:06.129683    4700 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57235 /etc/kubernetes/kubelet.conf
	I1011 15:04:06.132251    4700 kubeadm.go:163] "https://control-plane.minikube.internal:57235" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:57235 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 15:04:06.132276    4700 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 15:04:06.135110    4700 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57235 /etc/kubernetes/controller-manager.conf
	I1011 15:04:06.137395    4700 kubeadm.go:163] "https://control-plane.minikube.internal:57235" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:57235 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 15:04:06.137416    4700 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 15:04:06.140454    4700 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57235 /etc/kubernetes/scheduler.conf
	I1011 15:04:06.143518    4700 kubeadm.go:163] "https://control-plane.minikube.internal:57235" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:57235 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 15:04:06.143550    4700 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 15:04:06.146121    4700 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 15:04:06.163058    4700 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1011 15:04:06.163087    4700 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 15:04:06.209801    4700 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 15:04:06.209855    4700 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 15:04:06.209913    4700 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1011 15:04:06.270042    4700 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 15:04:06.274154    4700 out.go:235]   - Generating certificates and keys ...
	I1011 15:04:06.274189    4700 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 15:04:06.274219    4700 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 15:04:06.274257    4700 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 15:04:06.274309    4700 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 15:04:06.274346    4700 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 15:04:06.274370    4700 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 15:04:06.274491    4700 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 15:04:06.274534    4700 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 15:04:06.274569    4700 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 15:04:06.274607    4700 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 15:04:06.274628    4700 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 15:04:06.274709    4700 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 15:04:06.346365    4700 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 15:04:06.511958    4700 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 15:04:06.674459    4700 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 15:04:06.762037    4700 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 15:04:06.791013    4700 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 15:04:06.791364    4700 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 15:04:06.791384    4700 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 15:04:06.878957    4700 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 15:04:05.956104    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:05.956127    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:06.881556    4700 out.go:235]   - Booting up control plane ...
	I1011 15:04:06.881620    4700 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 15:04:06.882297    4700 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 15:04:06.882894    4700 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 15:04:06.883206    4700 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 15:04:06.884071    4700 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1011 15:04:11.390990    4700 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.506847 seconds
	I1011 15:04:11.391056    4700 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1011 15:04:11.395047    4700 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1011 15:04:11.904178    4700 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1011 15:04:11.904323    4700 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-130000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1011 15:04:12.408206    4700 kubeadm.go:310] [bootstrap-token] Using token: jg91qw.k5j416b1pvlje6xg
	I1011 15:04:12.413856    4700 out.go:235]   - Configuring RBAC rules ...
	I1011 15:04:12.413928    4700 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1011 15:04:12.413976    4700 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1011 15:04:12.420221    4700 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1011 15:04:12.421085    4700 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1011 15:04:12.421923    4700 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1011 15:04:12.422820    4700 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1011 15:04:12.426137    4700 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1011 15:04:12.611514    4700 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1011 15:04:12.812882    4700 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1011 15:04:12.813365    4700 kubeadm.go:310] 
	I1011 15:04:12.813395    4700 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1011 15:04:12.813402    4700 kubeadm.go:310] 
	I1011 15:04:12.813448    4700 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1011 15:04:12.813454    4700 kubeadm.go:310] 
	I1011 15:04:12.813469    4700 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1011 15:04:12.813501    4700 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1011 15:04:12.813528    4700 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1011 15:04:12.813533    4700 kubeadm.go:310] 
	I1011 15:04:12.813563    4700 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1011 15:04:12.813567    4700 kubeadm.go:310] 
	I1011 15:04:12.813608    4700 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1011 15:04:12.813612    4700 kubeadm.go:310] 
	I1011 15:04:12.813635    4700 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1011 15:04:12.813669    4700 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1011 15:04:12.813713    4700 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1011 15:04:12.813717    4700 kubeadm.go:310] 
	I1011 15:04:12.813757    4700 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1011 15:04:12.813801    4700 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1011 15:04:12.813806    4700 kubeadm.go:310] 
	I1011 15:04:12.813847    4700 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jg91qw.k5j416b1pvlje6xg \
	I1011 15:04:12.813910    4700 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ff7372af64c3996e800eaf522c3eb51c544993254bf1d45ae249aa6259e8117f \
	I1011 15:04:12.813923    4700 kubeadm.go:310] 	--control-plane 
	I1011 15:04:12.813926    4700 kubeadm.go:310] 
	I1011 15:04:12.813967    4700 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1011 15:04:12.813974    4700 kubeadm.go:310] 
	I1011 15:04:12.814030    4700 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jg91qw.k5j416b1pvlje6xg \
	I1011 15:04:12.814084    4700 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ff7372af64c3996e800eaf522c3eb51c544993254bf1d45ae249aa6259e8117f 
	I1011 15:04:12.814146    4700 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
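The `--discovery-token-ca-cert-hash` value in the join commands above is a SHA-256 digest of the cluster CA certificate's Subject Public Key Info. A short sketch of recomputing it from ca.crt (a standalone illustration, not minikube code; the path is the guest-side location used earlier in this log):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// kubeadm computes the hash over the DER-encoded Subject Public Key Info.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}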
	I1011 15:04:12.814206    4700 cni.go:84] Creating CNI manager for ""
	I1011 15:04:12.814217    4700 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1011 15:04:12.818353    4700 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 15:04:12.824332    4700 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 15:04:12.827549    4700 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1011 15:04:12.832136    4700 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 15:04:12.832184    4700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 15:04:12.832213    4700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-130000 minikube.k8s.io/updated_at=2024_10_11T15_04_12_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=running-upgrade-130000 minikube.k8s.io/primary=true
	I1011 15:04:12.876817    4700 kubeadm.go:1113] duration metric: took 44.675ms to wait for elevateKubeSystemPrivileges
	I1011 15:04:12.876831    4700 ops.go:34] apiserver oom_adj: -16
	I1011 15:04:12.876839    4700 kubeadm.go:394] duration metric: took 4m15.983905709s to StartCluster
	I1011 15:04:12.876850    4700 settings.go:142] acquiring lock: {Name:mka75dc1604295e2b491b48ad476a4c06f6cece7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 15:04:12.876961    4700 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 15:04:12.877422    4700 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/kubeconfig: {Name:mkc848521291f94f61a80272f8eb43a8779805e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 15:04:12.877605    4700 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 15:04:12.877632    4700 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1011 15:04:12.877664    4700 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-130000"
	I1011 15:04:12.877674    4700 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-130000"
	W1011 15:04:12.877677    4700 addons.go:243] addon storage-provisioner should already be in state true
	I1011 15:04:12.877674    4700 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-130000"
	I1011 15:04:12.877685    4700 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-130000"
	I1011 15:04:12.877691    4700 host.go:66] Checking if "running-upgrade-130000" exists ...
	I1011 15:04:12.877794    4700 config.go:182] Loaded profile config "running-upgrade-130000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1011 15:04:12.878818    4700 kapi.go:59] client config for running-upgrade-130000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/running-upgrade-130000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/running-upgrade-130000/client.key", CAFile:"/Users/jenkins/minikube-integration/19749-1186/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104662e40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1011 15:04:12.879185    4700 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-130000"
	W1011 15:04:12.879190    4700 addons.go:243] addon default-storageclass should already be in state true
	I1011 15:04:12.879196    4700 host.go:66] Checking if "running-upgrade-130000" exists ...
	I1011 15:04:12.881291    4700 out.go:177] * Verifying Kubernetes components...
	I1011 15:04:12.881719    4700 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1011 15:04:12.885439    4700 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1011 15:04:12.885447    4700 sshutil.go:53] new ssh client: &{IP:localhost Port:57203 SSHKeyPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/running-upgrade-130000/id_rsa Username:docker}
	I1011 15:04:12.888235    4700 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 15:04:10.958234    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:10.958258    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:12.892336    4700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 15:04:12.896336    4700 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 15:04:12.896343    4700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1011 15:04:12.896349    4700 sshutil.go:53] new ssh client: &{IP:localhost Port:57203 SSHKeyPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/running-upgrade-130000/id_rsa Username:docker}
	I1011 15:04:12.992170    4700 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 15:04:12.999049    4700 api_server.go:52] waiting for apiserver process to appear ...
	I1011 15:04:12.999116    4700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 15:04:13.003418    4700 api_server.go:72] duration metric: took 125.80275ms to wait for apiserver process to appear ...
	I1011 15:04:13.003426    4700 api_server.go:88] waiting for apiserver healthz status ...
	I1011 15:04:13.003435    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:13.026543    4700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1011 15:04:13.052374    4700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 15:04:13.381200    4700 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1011 15:04:13.381212    4700 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1011 15:04:15.959439    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:15.959483    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:18.005469    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:18.005536    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:20.961767    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:20.961816    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:23.005837    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:23.005898    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:25.963415    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:25.963555    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:04:25.978750    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:04:25.978832    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:04:25.995250    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:04:25.995331    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:04:26.006203    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:04:26.006291    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:04:26.018234    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:04:26.018322    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:04:26.028738    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:04:26.028821    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:04:26.039839    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:04:26.039925    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:04:26.049867    5145 logs.go:282] 0 containers: []
	W1011 15:04:26.049879    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:04:26.049945    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:04:26.061416    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:04:26.061434    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:04:26.061440    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:04:26.073341    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:04:26.073354    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:04:26.087943    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:04:26.087953    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:04:26.115634    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:04:26.115647    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:04:26.132129    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:04:26.132141    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:04:26.148956    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:04:26.148968    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:04:26.160188    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:04:26.160201    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:04:26.172991    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:04:26.173003    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:04:26.190402    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:04:26.190414    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:04:26.203403    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:04:26.203417    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:04:26.247598    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:04:26.247618    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:04:26.366912    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:04:26.366928    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:04:26.380466    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:04:26.380484    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:04:26.397818    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:04:26.397831    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:04:26.416773    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:04:26.416788    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:04:26.421956    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:04:26.421968    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:04:26.435685    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:04:26.435696    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:04:28.965993    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:28.006673    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:28.006695    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:33.968154    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:33.968428    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:04:33.995520    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:04:33.995666    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:04:34.018530    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:04:34.018619    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:04:34.031932    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:04:34.032039    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:04:34.043939    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:04:34.044021    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:04:34.056520    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:04:34.056586    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:04:34.067675    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:04:34.067756    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:04:34.077794    5145 logs.go:282] 0 containers: []
	W1011 15:04:34.077808    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:04:34.077875    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:04:34.089139    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:04:34.089156    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:04:34.089167    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:04:34.100690    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:04:34.100699    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:04:34.118617    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:04:34.118628    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:04:34.144196    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:04:34.144204    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:04:34.183666    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:04:34.183697    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:04:34.196702    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:04:34.196714    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:04:34.207870    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:04:34.207881    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:04:34.247063    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:04:34.247072    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:04:34.262353    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:04:34.262363    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:04:34.289565    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:04:34.289581    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:04:34.301070    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:04:34.301088    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:04:34.316412    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:04:34.316425    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:04:34.335625    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:04:34.335635    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:04:34.340349    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:04:34.340354    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:04:34.354776    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:04:34.354785    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:04:34.369183    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:04:34.369196    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:04:34.381452    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:04:34.381461    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:04:33.007231    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:33.007281    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:36.895074    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:38.008066    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:38.008099    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:43.009026    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:43.009067    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1011 15:04:43.382096    4700 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1011 15:04:43.386269    4700 out.go:177] * Enabled addons: storage-provisioner
	I1011 15:04:41.897298    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:41.897554    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:04:41.921313    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:04:41.921444    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:04:41.937247    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:04:41.937342    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:04:41.951713    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:04:41.951788    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:04:41.962693    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:04:41.962789    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:04:41.973809    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:04:41.973887    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:04:41.984707    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:04:41.984789    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:04:41.995489    5145 logs.go:282] 0 containers: []
	W1011 15:04:41.995499    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:04:41.995560    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:04:42.006263    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:04:42.006281    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:04:42.006288    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:04:42.019170    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:04:42.019184    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:04:42.043358    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:04:42.043368    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:04:42.059260    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:04:42.059273    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:04:42.073873    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:04:42.073882    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:04:42.092533    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:04:42.092545    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:04:42.105143    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:04:42.105172    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:04:42.117324    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:04:42.117339    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:04:42.142146    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:04:42.142156    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:04:42.160141    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:04:42.160150    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:04:42.175429    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:04:42.175439    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:04:42.179583    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:04:42.179589    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:04:42.216016    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:04:42.216030    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:04:42.229954    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:04:42.229964    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:04:42.240961    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:04:42.240971    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:04:42.278743    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:04:42.278751    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:04:42.295192    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:04:42.295202    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:04:44.808657    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:43.394236    4700 addons.go:510] duration metric: took 30.517085292s for enable addons: enabled=[storage-provisioner]
	I1011 15:04:49.810110    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:49.810380    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:04:49.837047    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:04:49.837172    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:04:49.852980    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:04:49.853078    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:04:49.867612    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:04:49.867709    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:04:49.883461    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:04:49.883542    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:04:49.894493    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:04:49.894570    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:04:49.905019    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:04:49.905101    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:04:49.915069    5145 logs.go:282] 0 containers: []
	W1011 15:04:49.915079    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:04:49.915146    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:04:49.925292    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:04:49.925310    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:04:49.925316    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:04:49.929646    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:04:49.929655    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:04:49.965614    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:04:49.965624    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:04:49.980256    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:04:49.980270    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:04:50.012990    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:04:50.013004    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:04:50.031157    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:04:50.031171    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:04:50.043424    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:04:50.043436    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:04:50.081195    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:04:50.081212    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:04:50.096199    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:04:50.096209    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:04:50.121882    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:04:50.121898    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:04:50.133376    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:04:50.133387    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:04:50.147678    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:04:50.147687    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:04:50.159107    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:04:50.159119    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:04:50.171061    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:04:50.171073    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:04:50.182998    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:04:50.183009    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:04:50.200193    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:04:50.200204    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:04:50.211850    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:04:50.211862    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:04:48.010286    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:48.010327    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:52.729369    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:53.011942    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:53.011965    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:57.730224    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:57.730405    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:04:57.746272    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:04:57.746362    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:04:57.758874    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:04:57.758954    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:04:57.770007    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:04:57.770089    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:04:57.780308    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:04:57.780386    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:04:57.790669    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:04:57.790738    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:04:57.801684    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:04:57.801767    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:04:57.811832    5145 logs.go:282] 0 containers: []
	W1011 15:04:57.811847    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:04:57.811917    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:04:57.822561    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:04:57.822576    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:04:57.822580    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:04:57.836124    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:04:57.836133    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:04:57.848013    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:04:57.848025    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:04:57.867273    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:04:57.867283    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:04:57.878918    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:04:57.878929    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:04:57.893109    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:04:57.893121    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:04:57.909676    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:04:57.909689    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:04:57.925335    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:04:57.925347    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:04:57.949497    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:04:57.949508    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:04:57.987288    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:04:57.987299    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:04:58.012890    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:04:58.012901    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:04:58.027633    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:04:58.027648    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:04:58.034600    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:04:58.034621    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:04:58.070901    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:04:58.070912    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:04:58.087395    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:04:58.087406    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:04:58.099243    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:04:58.099253    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:04:58.116707    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:04:58.116717    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:04:58.013704    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:58.013727    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:05:00.631225    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:05:03.015711    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:05:03.015737    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:05:05.633515    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:05:05.633980    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:05:05.669758    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:05:05.669958    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:05:05.695726    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:05:05.695842    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:05:05.710218    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:05:05.710307    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:05:05.721818    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:05:05.721891    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:05:05.732127    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:05:05.732206    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:05:05.742788    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:05:05.742854    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:05:05.753443    5145 logs.go:282] 0 containers: []
	W1011 15:05:05.753455    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:05:05.753517    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:05:05.765928    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:05:05.765946    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:05:05.765952    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:05:05.791593    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:05:05.791604    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:05:05.804210    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:05:05.804220    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:05:05.829646    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:05:05.829654    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:05:05.868697    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:05:05.868708    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:05:05.911970    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:05:05.911981    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:05:05.926104    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:05:05.926115    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:05:05.937569    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:05:05.937581    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:05:05.952774    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:05:05.952785    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:05:05.965230    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:05:05.965240    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:05:05.977240    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:05:05.977253    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:05:05.989367    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:05:05.989377    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:05:05.993656    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:05:05.993661    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:05:06.007511    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:05:06.007519    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:05:06.022249    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:05:06.022260    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:05:06.045503    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:05:06.045512    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:05:06.057457    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:05:06.057473    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:05:08.573568    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:05:08.017838    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:05:08.017877    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:05:13.576145    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:05:13.576301    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:05:13.589288    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:05:13.589370    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:05:13.600145    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:05:13.600225    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:05:13.611127    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:05:13.611206    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:05:13.624567    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:05:13.624642    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:05:13.634876    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:05:13.634953    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:05:13.645149    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:05:13.645222    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:05:13.655327    5145 logs.go:282] 0 containers: []
	W1011 15:05:13.655339    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:05:13.655405    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:05:13.666242    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:05:13.666260    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:05:13.666266    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:05:13.670468    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:05:13.670477    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:05:13.684399    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:05:13.684408    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:05:13.699918    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:05:13.699928    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:05:13.722594    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:05:13.722605    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:05:13.757013    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:05:13.757024    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:05:13.782445    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:05:13.782455    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:05:13.797138    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:05:13.797149    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:05:13.808939    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:05:13.808948    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:05:13.820286    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:05:13.820297    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:05:13.833460    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:05:13.833471    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:05:13.872596    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:05:13.872607    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:05:13.893861    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:05:13.893871    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:05:13.904877    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:05:13.904890    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:05:13.916265    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:05:13.916280    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:05:13.933792    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:05:13.933802    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:05:13.944962    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:05:13.944974    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:05:13.020085    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:05:13.020183    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:05:13.031494    4700 logs.go:282] 1 containers: [6a1874a90592]
	I1011 15:05:13.031576    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:05:13.042694    4700 logs.go:282] 1 containers: [c84b1906f7fd]
	I1011 15:05:13.042772    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:05:13.053304    4700 logs.go:282] 2 containers: [7f1165bcc644 eb84c0e2fa42]
	I1011 15:05:13.053380    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:05:13.067006    4700 logs.go:282] 1 containers: [b649cd1f1ae2]
	I1011 15:05:13.067085    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:05:13.077814    4700 logs.go:282] 1 containers: [573b330f3507]
	I1011 15:05:13.077887    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:05:13.088335    4700 logs.go:282] 1 containers: [6d49685ed855]
	I1011 15:05:13.088399    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:05:13.098792    4700 logs.go:282] 0 containers: []
	W1011 15:05:13.098802    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:05:13.098865    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:05:13.109160    4700 logs.go:282] 1 containers: [bbaa751bccbf]
	I1011 15:05:13.109174    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:05:13.109180    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:05:13.150292    4700 logs.go:123] Gathering logs for kube-apiserver [6a1874a90592] ...
	I1011 15:05:13.150304    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1874a90592"
	I1011 15:05:13.164687    4700 logs.go:123] Gathering logs for coredns [7f1165bcc644] ...
	I1011 15:05:13.164698    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f1165bcc644"
	I1011 15:05:13.176380    4700 logs.go:123] Gathering logs for kube-proxy [573b330f3507] ...
	I1011 15:05:13.176392    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 573b330f3507"
	I1011 15:05:13.191899    4700 logs.go:123] Gathering logs for kube-controller-manager [6d49685ed855] ...
	I1011 15:05:13.191909    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d49685ed855"
	I1011 15:05:13.209720    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:05:13.209734    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:05:13.221363    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:05:13.221378    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:05:13.240110    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:05:13.240201    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:05:13.257621    4700 logs.go:123] Gathering logs for etcd [c84b1906f7fd] ...
	I1011 15:05:13.257627    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84b1906f7fd"
	I1011 15:05:13.273862    4700 logs.go:123] Gathering logs for coredns [eb84c0e2fa42] ...
	I1011 15:05:13.273872    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb84c0e2fa42"
	I1011 15:05:13.285165    4700 logs.go:123] Gathering logs for kube-scheduler [b649cd1f1ae2] ...
	I1011 15:05:13.285178    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b649cd1f1ae2"
	I1011 15:05:13.308491    4700 logs.go:123] Gathering logs for storage-provisioner [bbaa751bccbf] ...
	I1011 15:05:13.308506    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbaa751bccbf"
	I1011 15:05:13.320199    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:05:13.320210    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:05:13.343620    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:05:13.343627    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:05:13.347669    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:05:13.347677    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:05:13.347700    4700 out.go:270] X Problems detected in kubelet:
	W1011 15:05:13.347706    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:05:13.347709    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:05:13.347713    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:05:13.347716    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:05:16.470969    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:05:21.473556    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:05:21.473989    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:05:21.503859    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:05:21.504006    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:05:21.522988    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:05:21.523081    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:05:21.538587    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:05:21.538665    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:05:21.550420    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:05:21.550526    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:05:21.561150    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:05:21.561232    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:05:21.573837    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:05:21.573925    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:05:21.585155    5145 logs.go:282] 0 containers: []
	W1011 15:05:21.585169    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:05:21.585234    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:05:21.596166    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:05:21.596185    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:05:21.596191    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:05:21.608039    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:05:21.608051    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:05:21.620703    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:05:21.620716    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:05:21.632773    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:05:21.632783    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:05:21.645233    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:05:21.645244    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:05:21.664975    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:05:21.664985    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:05:21.682691    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:05:21.682704    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:05:21.694814    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:05:21.694826    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:05:21.734626    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:05:21.734637    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:05:21.752143    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:05:21.752153    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:05:21.776849    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:05:21.776859    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:05:21.788684    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:05:21.788695    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:05:21.800630    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:05:21.800641    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:05:21.823835    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:05:21.823841    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:05:21.828333    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:05:21.828340    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:05:21.866314    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:05:21.866329    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:05:21.880588    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:05:21.880598    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:05:24.397220    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:05:23.351753    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:05:29.398052    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:05:29.398254    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:05:29.416363    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:05:29.416462    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:05:29.429872    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:05:29.429960    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:05:29.440972    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:05:29.441048    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:05:29.451702    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:05:29.451787    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:05:29.462383    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:05:29.462458    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:05:29.473136    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:05:29.473211    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:05:29.484087    5145 logs.go:282] 0 containers: []
	W1011 15:05:29.484099    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:05:29.484168    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:05:29.494657    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:05:29.494674    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:05:29.494679    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:05:29.505652    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:05:29.505665    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:05:29.523573    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:05:29.523583    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:05:29.548668    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:05:29.548675    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:05:29.563194    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:05:29.563209    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:05:29.603160    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:05:29.603171    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:05:29.637578    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:05:29.637591    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:05:29.651526    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:05:29.651539    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:05:29.665657    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:05:29.665666    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:05:29.680036    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:05:29.680045    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:05:29.692354    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:05:29.692366    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:05:29.717338    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:05:29.717347    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:05:29.728968    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:05:29.728978    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:05:29.741219    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:05:29.741228    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:05:29.752950    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:05:29.752959    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:05:29.764316    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:05:29.764330    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:05:29.768774    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:05:29.768781    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:05:28.354071    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:05:28.354522    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:05:28.384104    4700 logs.go:282] 1 containers: [6a1874a90592]
	I1011 15:05:28.384236    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:05:28.402485    4700 logs.go:282] 1 containers: [c84b1906f7fd]
	I1011 15:05:28.402589    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:05:28.416157    4700 logs.go:282] 2 containers: [7f1165bcc644 eb84c0e2fa42]
	I1011 15:05:28.416238    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:05:28.428148    4700 logs.go:282] 1 containers: [b649cd1f1ae2]
	I1011 15:05:28.428227    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:05:28.438798    4700 logs.go:282] 1 containers: [573b330f3507]
	I1011 15:05:28.438861    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:05:28.450134    4700 logs.go:282] 1 containers: [6d49685ed855]
	I1011 15:05:28.450209    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:05:28.463952    4700 logs.go:282] 0 containers: []
	W1011 15:05:28.463963    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:05:28.464028    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:05:28.474637    4700 logs.go:282] 1 containers: [bbaa751bccbf]
	I1011 15:05:28.474653    4700 logs.go:123] Gathering logs for coredns [eb84c0e2fa42] ...
	I1011 15:05:28.474659    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb84c0e2fa42"
	I1011 15:05:28.486356    4700 logs.go:123] Gathering logs for kube-scheduler [b649cd1f1ae2] ...
	I1011 15:05:28.486366    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b649cd1f1ae2"
	I1011 15:05:28.501561    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:05:28.501571    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:05:28.526387    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:05:28.526397    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:05:28.531072    4700 logs.go:123] Gathering logs for etcd [c84b1906f7fd] ...
	I1011 15:05:28.531080    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84b1906f7fd"
	I1011 15:05:28.545439    4700 logs.go:123] Gathering logs for kube-apiserver [6a1874a90592] ...
	I1011 15:05:28.545449    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1874a90592"
	I1011 15:05:28.559182    4700 logs.go:123] Gathering logs for coredns [7f1165bcc644] ...
	I1011 15:05:28.559192    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f1165bcc644"
	I1011 15:05:28.570492    4700 logs.go:123] Gathering logs for kube-proxy [573b330f3507] ...
	I1011 15:05:28.570503    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 573b330f3507"
	I1011 15:05:28.582384    4700 logs.go:123] Gathering logs for kube-controller-manager [6d49685ed855] ...
	I1011 15:05:28.582399    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d49685ed855"
	I1011 15:05:28.599617    4700 logs.go:123] Gathering logs for storage-provisioner [bbaa751bccbf] ...
	I1011 15:05:28.599631    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbaa751bccbf"
	I1011 15:05:28.611154    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:05:28.611164    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:05:28.622766    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:05:28.622780    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:05:28.642877    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:05:28.642970    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:05:28.660206    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:05:28.660213    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:05:28.695088    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:05:28.695099    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:05:28.695126    4700 out.go:270] X Problems detected in kubelet:
	W1011 15:05:28.695132    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:05:28.695135    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:05:28.695142    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:05:28.695145    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:05:32.280872    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:05:37.283092    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:05:37.283284    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:05:37.298373    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:05:37.298462    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:05:37.310563    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:05:37.310640    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:05:37.321077    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:05:37.321160    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:05:37.331235    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:05:37.331307    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:05:37.341775    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:05:37.341856    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:05:37.352344    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:05:37.352423    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:05:37.362460    5145 logs.go:282] 0 containers: []
	W1011 15:05:37.362470    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:05:37.362534    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:05:37.375235    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:05:37.375252    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:05:37.375257    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:05:37.392558    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:05:37.392567    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:05:37.403490    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:05:37.403500    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:05:37.427148    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:05:37.427157    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:05:37.438793    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:05:37.438802    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:05:37.474044    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:05:37.474054    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:05:37.489967    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:05:37.489978    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:05:37.500972    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:05:37.500983    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:05:37.505190    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:05:37.505199    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:05:37.519255    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:05:37.519264    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:05:37.544504    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:05:37.544516    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:05:37.558493    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:05:37.558505    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:05:37.572543    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:05:37.572556    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:05:37.583599    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:05:37.583611    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:05:37.595908    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:05:37.595920    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:05:37.610747    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:05:37.610758    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:05:37.651087    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:05:37.651096    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:05:40.164912    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:05:38.699195    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:05:45.167191    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:05:45.167373    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:05:45.178593    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:05:45.178684    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:05:45.189637    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:05:45.189713    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:05:45.202050    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:05:45.202128    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:05:45.212873    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:05:45.212961    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:05:45.224636    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:05:45.224715    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:05:45.237974    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:05:45.238044    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:05:45.248398    5145 logs.go:282] 0 containers: []
	W1011 15:05:45.248411    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:05:45.248475    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:05:45.259090    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:05:45.259111    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:05:45.259117    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:05:45.295815    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:05:45.295826    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:05:45.310113    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:05:45.310127    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:05:45.322872    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:05:45.322886    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:05:45.362038    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:05:45.362049    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:05:45.391519    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:05:45.391532    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:05:45.404086    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:05:45.404098    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:05:45.418536    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:05:45.418545    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:05:45.432974    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:05:45.432984    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:05:45.455874    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:05:45.455880    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:05:45.460256    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:05:45.460262    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:05:45.475447    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:05:45.475457    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:05:45.487371    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:05:45.487384    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:05:45.506107    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:05:45.506118    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:05:45.520943    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:05:45.520957    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:05:45.531839    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:05:45.531850    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:05:43.701067    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:05:43.701565    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:05:43.737556    4700 logs.go:282] 1 containers: [6a1874a90592]
	I1011 15:05:43.737700    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:05:43.757141    4700 logs.go:282] 1 containers: [c84b1906f7fd]
	I1011 15:05:43.757256    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:05:43.776097    4700 logs.go:282] 2 containers: [7f1165bcc644 eb84c0e2fa42]
	I1011 15:05:43.776184    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:05:43.787853    4700 logs.go:282] 1 containers: [b649cd1f1ae2]
	I1011 15:05:43.787930    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:05:43.798611    4700 logs.go:282] 1 containers: [573b330f3507]
	I1011 15:05:43.798684    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:05:43.809224    4700 logs.go:282] 1 containers: [6d49685ed855]
	I1011 15:05:43.809302    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:05:43.819928    4700 logs.go:282] 0 containers: []
	W1011 15:05:43.819944    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:05:43.820008    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:05:43.830520    4700 logs.go:282] 1 containers: [bbaa751bccbf]
	I1011 15:05:43.830535    4700 logs.go:123] Gathering logs for kube-apiserver [6a1874a90592] ...
	I1011 15:05:43.830540    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1874a90592"
	I1011 15:05:43.848884    4700 logs.go:123] Gathering logs for coredns [eb84c0e2fa42] ...
	I1011 15:05:43.848896    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb84c0e2fa42"
	I1011 15:05:43.861265    4700 logs.go:123] Gathering logs for kube-proxy [573b330f3507] ...
	I1011 15:05:43.861276    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 573b330f3507"
	I1011 15:05:43.873687    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:05:43.873698    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:05:43.878340    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:05:43.878347    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:05:43.914264    4700 logs.go:123] Gathering logs for etcd [c84b1906f7fd] ...
	I1011 15:05:43.914275    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84b1906f7fd"
	I1011 15:05:43.928736    4700 logs.go:123] Gathering logs for coredns [7f1165bcc644] ...
	I1011 15:05:43.928747    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f1165bcc644"
	I1011 15:05:43.940820    4700 logs.go:123] Gathering logs for kube-scheduler [b649cd1f1ae2] ...
	I1011 15:05:43.940832    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b649cd1f1ae2"
	I1011 15:05:43.956336    4700 logs.go:123] Gathering logs for kube-controller-manager [6d49685ed855] ...
	I1011 15:05:43.956345    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d49685ed855"
	I1011 15:05:43.979323    4700 logs.go:123] Gathering logs for storage-provisioner [bbaa751bccbf] ...
	I1011 15:05:43.979335    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbaa751bccbf"
	I1011 15:05:43.991934    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:05:43.991945    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:05:44.015638    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:05:44.015646    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:05:44.032874    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:05:44.032970    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:05:44.050518    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:05:44.050527    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:05:44.062770    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:05:44.062781    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:05:44.062805    4700 out.go:270] X Problems detected in kubelet:
	W1011 15:05:44.062810    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:05:44.062815    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:05:44.062820    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:05:44.062824    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:05:45.543350    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:05:45.543364    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:05:48.063829    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:05:53.065992    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:05:53.066229    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:05:53.093553    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:05:53.093647    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:05:53.107515    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:05:53.107602    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:05:53.119431    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:05:53.119512    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:05:53.130183    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:05:53.130259    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:05:53.140747    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:05:53.140821    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:05:53.150862    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:05:53.150931    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:05:53.161212    5145 logs.go:282] 0 containers: []
	W1011 15:05:53.161226    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:05:53.161287    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:05:53.172588    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:05:53.172610    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:05:53.172615    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:05:53.184722    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:05:53.184732    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:05:53.200464    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:05:53.200477    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:05:53.212342    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:05:53.212354    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:05:53.230515    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:05:53.230523    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:05:53.241590    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:05:53.241601    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:05:53.245640    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:05:53.245649    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:05:53.257638    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:05:53.257649    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:05:53.273512    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:05:53.273521    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:05:53.290813    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:05:53.290823    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:05:53.303397    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:05:53.303408    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:05:53.326395    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:05:53.326404    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:05:53.363085    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:05:53.363096    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:05:53.388789    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:05:53.388802    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:05:53.402648    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:05:53.402657    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:05:53.437381    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:05:53.437391    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:05:53.454897    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:05:53.454907    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:05:54.066778    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:05:55.971812    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:05:59.068992    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:05:59.069155    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:05:59.083966    4700 logs.go:282] 1 containers: [6a1874a90592]
	I1011 15:05:59.084059    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:05:59.096556    4700 logs.go:282] 1 containers: [c84b1906f7fd]
	I1011 15:05:59.096638    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:05:59.108018    4700 logs.go:282] 2 containers: [7f1165bcc644 eb84c0e2fa42]
	I1011 15:05:59.108089    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:05:59.119791    4700 logs.go:282] 1 containers: [b649cd1f1ae2]
	I1011 15:05:59.119871    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:05:59.131629    4700 logs.go:282] 1 containers: [573b330f3507]
	I1011 15:05:59.131711    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:05:59.143334    4700 logs.go:282] 1 containers: [6d49685ed855]
	I1011 15:05:59.143415    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:05:59.154049    4700 logs.go:282] 0 containers: []
	W1011 15:05:59.154060    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:05:59.154126    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:05:59.165165    4700 logs.go:282] 1 containers: [bbaa751bccbf]
	I1011 15:05:59.165182    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:05:59.165189    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:05:59.169885    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:05:59.169893    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:05:59.210357    4700 logs.go:123] Gathering logs for coredns [7f1165bcc644] ...
	I1011 15:05:59.210369    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f1165bcc644"
	I1011 15:05:59.222564    4700 logs.go:123] Gathering logs for kube-controller-manager [6d49685ed855] ...
	I1011 15:05:59.222579    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d49685ed855"
	I1011 15:05:59.241835    4700 logs.go:123] Gathering logs for storage-provisioner [bbaa751bccbf] ...
	I1011 15:05:59.241846    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbaa751bccbf"
	I1011 15:05:59.254296    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:05:59.254307    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:05:59.267252    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:05:59.267265    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:05:59.286799    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:05:59.286895    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:05:59.304247    4700 logs.go:123] Gathering logs for etcd [c84b1906f7fd] ...
	I1011 15:05:59.304256    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84b1906f7fd"
	I1011 15:05:59.319202    4700 logs.go:123] Gathering logs for coredns [eb84c0e2fa42] ...
	I1011 15:05:59.319212    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb84c0e2fa42"
	I1011 15:05:59.331578    4700 logs.go:123] Gathering logs for kube-scheduler [b649cd1f1ae2] ...
	I1011 15:05:59.331592    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b649cd1f1ae2"
	I1011 15:05:59.347341    4700 logs.go:123] Gathering logs for kube-proxy [573b330f3507] ...
	I1011 15:05:59.347351    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 573b330f3507"
	I1011 15:05:59.359807    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:05:59.359817    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:05:59.384917    4700 logs.go:123] Gathering logs for kube-apiserver [6a1874a90592] ...
	I1011 15:05:59.384929    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1874a90592"
	I1011 15:05:59.400190    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:05:59.400202    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:05:59.400224    4700 out.go:270] X Problems detected in kubelet:
	W1011 15:05:59.400229    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:05:59.400231    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:05:59.400235    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:05:59.400238    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:06:00.972820    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:06:00.973146    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:06:01.005318    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:06:01.005464    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:06:01.026704    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:06:01.026805    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:06:01.041296    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:06:01.041387    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:06:01.054776    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:06:01.054859    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:06:01.070406    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:06:01.070480    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:06:01.081901    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:06:01.081981    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:06:01.092522    5145 logs.go:282] 0 containers: []
	W1011 15:06:01.092534    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:06:01.092600    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:06:01.103477    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:06:01.103496    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:06:01.103501    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:06:01.117715    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:06:01.117725    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:06:01.129557    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:06:01.129573    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:06:01.155261    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:06:01.155271    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:06:01.167740    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:06:01.167751    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:06:01.190560    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:06:01.190571    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:06:01.205396    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:06:01.205410    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:06:01.220538    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:06:01.220551    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:06:01.234549    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:06:01.234559    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:06:01.249991    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:06:01.250002    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:06:01.261657    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:06:01.261669    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:06:01.273644    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:06:01.273656    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:06:01.285494    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:06:01.285504    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:06:01.289494    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:06:01.289501    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:06:01.300971    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:06:01.300981    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:06:01.325931    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:06:01.325941    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:06:01.361184    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:06:01.361197    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:06:03.900903    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:06:08.903254    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:06:08.903489    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:06:08.930505    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:06:08.930604    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:06:08.943628    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:06:08.943709    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:06:08.954518    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:06:08.954595    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:06:08.965090    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:06:08.965170    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:06:08.978922    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:06:08.978995    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:06:08.996667    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:06:08.996749    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:06:09.007088    5145 logs.go:282] 0 containers: []
	W1011 15:06:09.007099    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:06:09.007164    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:06:09.017927    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:06:09.017946    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:06:09.017952    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:06:09.029596    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:06:09.029611    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:06:09.035336    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:06:09.035344    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:06:09.046806    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:06:09.046819    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:06:09.069039    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:06:09.069049    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:06:09.086702    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:06:09.086715    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:06:09.098814    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:06:09.098825    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:06:09.110527    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:06:09.110537    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:06:09.125258    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:06:09.125268    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:06:09.157638    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:06:09.157649    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:06:09.173018    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:06:09.173031    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:06:09.210668    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:06:09.210679    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:06:09.245641    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:06:09.245652    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:06:09.260386    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:06:09.260396    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:06:09.285011    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:06:09.285023    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:06:09.298703    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:06:09.298715    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:06:09.313546    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:06:09.313558    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:06:09.402331    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:06:11.837946    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:06:14.403887    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:06:14.404046    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:06:14.415967    4700 logs.go:282] 1 containers: [6a1874a90592]
	I1011 15:06:14.416044    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:06:14.427460    4700 logs.go:282] 1 containers: [c84b1906f7fd]
	I1011 15:06:14.427539    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:06:14.438554    4700 logs.go:282] 2 containers: [7f1165bcc644 eb84c0e2fa42]
	I1011 15:06:14.438631    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:06:14.449988    4700 logs.go:282] 1 containers: [b649cd1f1ae2]
	I1011 15:06:14.450062    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:06:14.461176    4700 logs.go:282] 1 containers: [573b330f3507]
	I1011 15:06:14.461255    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:06:14.474901    4700 logs.go:282] 1 containers: [6d49685ed855]
	I1011 15:06:14.474974    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:06:14.485929    4700 logs.go:282] 0 containers: []
	W1011 15:06:14.485941    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:06:14.486013    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:06:14.497363    4700 logs.go:282] 1 containers: [bbaa751bccbf]
	I1011 15:06:14.497377    4700 logs.go:123] Gathering logs for kube-apiserver [6a1874a90592] ...
	I1011 15:06:14.497383    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1874a90592"
	I1011 15:06:14.516906    4700 logs.go:123] Gathering logs for etcd [c84b1906f7fd] ...
	I1011 15:06:14.516916    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84b1906f7fd"
	I1011 15:06:14.532025    4700 logs.go:123] Gathering logs for coredns [7f1165bcc644] ...
	I1011 15:06:14.532035    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f1165bcc644"
	I1011 15:06:14.549308    4700 logs.go:123] Gathering logs for coredns [eb84c0e2fa42] ...
	I1011 15:06:14.549320    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb84c0e2fa42"
	I1011 15:06:14.561899    4700 logs.go:123] Gathering logs for kube-controller-manager [6d49685ed855] ...
	I1011 15:06:14.561911    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d49685ed855"
	I1011 15:06:14.580108    4700 logs.go:123] Gathering logs for storage-provisioner [bbaa751bccbf] ...
	I1011 15:06:14.580119    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbaa751bccbf"
	I1011 15:06:14.591937    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:06:14.591947    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:06:14.616708    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:06:14.616717    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:06:14.652694    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:06:14.652705    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:06:14.665758    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:06:14.665769    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:06:14.670703    4700 logs.go:123] Gathering logs for kube-scheduler [b649cd1f1ae2] ...
	I1011 15:06:14.670710    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b649cd1f1ae2"
	I1011 15:06:14.686736    4700 logs.go:123] Gathering logs for kube-proxy [573b330f3507] ...
	I1011 15:06:14.686745    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 573b330f3507"
	I1011 15:06:14.701407    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:06:14.701418    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:06:14.718983    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:06:14.719076    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:06:14.735992    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:06:14.736000    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:06:14.736025    4700 out.go:270] X Problems detected in kubelet:
	W1011 15:06:14.736030    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:06:14.736042    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:06:14.736048    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:06:14.736052    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:06:16.840217    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:06:16.840444    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:06:16.862021    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:06:16.862127    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:06:16.885742    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:06:16.885820    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:06:16.897432    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:06:16.897513    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:06:16.908462    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:06:16.908539    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:06:16.919275    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:06:16.919350    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:06:16.933607    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:06:16.933684    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:06:16.943792    5145 logs.go:282] 0 containers: []
	W1011 15:06:16.943804    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:06:16.943866    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:06:16.954165    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:06:16.954184    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:06:16.954190    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:06:16.958551    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:06:16.958563    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:06:16.970713    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:06:16.970724    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:06:16.995159    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:06:16.995167    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:06:17.007184    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:06:17.007194    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:06:17.020964    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:06:17.020978    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:06:17.037997    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:06:17.038009    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:06:17.053182    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:06:17.053193    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:06:17.065352    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:06:17.065363    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:06:17.083110    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:06:17.083120    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:06:17.117749    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:06:17.117764    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:06:17.142823    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:06:17.142834    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:06:17.182530    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:06:17.182538    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:06:17.201266    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:06:17.201276    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:06:17.218891    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:06:17.218906    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:06:17.235774    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:06:17.235784    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:06:17.248824    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:06:17.248834    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:06:19.762236    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:06:24.764553    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:06:24.764762    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:06:24.791368    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:06:24.791492    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:06:24.808660    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:06:24.808755    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:06:24.822711    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:06:24.822791    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:06:24.834392    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:06:24.834476    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:06:24.844504    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:06:24.844574    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:06:24.854613    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:06:24.854684    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:06:24.865249    5145 logs.go:282] 0 containers: []
	W1011 15:06:24.865265    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:06:24.865341    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:06:24.875786    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:06:24.875804    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:06:24.875811    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:06:24.904500    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:06:24.904510    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:06:24.915765    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:06:24.915776    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:06:24.928692    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:06:24.928706    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:06:24.940264    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:06:24.940272    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:06:24.980601    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:06:24.980616    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:06:25.005433    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:06:25.005445    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:06:25.017440    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:06:25.017450    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:06:25.021599    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:06:25.021605    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:06:25.047630    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:06:25.047644    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:06:25.066232    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:06:25.066247    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:06:25.082162    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:06:25.082173    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:06:25.093828    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:06:25.093838    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:06:25.111666    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:06:25.111676    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:06:25.122848    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:06:25.122859    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:06:25.165061    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:06:25.165071    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:06:25.180617    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:06:25.180629    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:06:24.740033    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:06:27.695068    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:06:29.742541    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:06:29.742996    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:06:29.778011    4700 logs.go:282] 1 containers: [6a1874a90592]
	I1011 15:06:29.778160    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:06:29.797678    4700 logs.go:282] 1 containers: [c84b1906f7fd]
	I1011 15:06:29.797779    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:06:29.811730    4700 logs.go:282] 4 containers: [4fbadd8de248 5396e266a7e9 7f1165bcc644 eb84c0e2fa42]
	I1011 15:06:29.811818    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:06:29.823252    4700 logs.go:282] 1 containers: [b649cd1f1ae2]
	I1011 15:06:29.823337    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:06:29.833895    4700 logs.go:282] 1 containers: [573b330f3507]
	I1011 15:06:29.833962    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:06:29.844384    4700 logs.go:282] 1 containers: [6d49685ed855]
	I1011 15:06:29.844464    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:06:29.854811    4700 logs.go:282] 0 containers: []
	W1011 15:06:29.854826    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:06:29.854883    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:06:29.865757    4700 logs.go:282] 1 containers: [bbaa751bccbf]
	I1011 15:06:29.865775    4700 logs.go:123] Gathering logs for storage-provisioner [bbaa751bccbf] ...
	I1011 15:06:29.865781    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbaa751bccbf"
	I1011 15:06:29.877949    4700 logs.go:123] Gathering logs for kube-apiserver [6a1874a90592] ...
	I1011 15:06:29.877963    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1874a90592"
	I1011 15:06:29.892461    4700 logs.go:123] Gathering logs for etcd [c84b1906f7fd] ...
	I1011 15:06:29.892470    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84b1906f7fd"
	I1011 15:06:29.906116    4700 logs.go:123] Gathering logs for coredns [5396e266a7e9] ...
	I1011 15:06:29.906125    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5396e266a7e9"
	I1011 15:06:29.917275    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:06:29.917287    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:06:29.940715    4700 logs.go:123] Gathering logs for kube-proxy [573b330f3507] ...
	I1011 15:06:29.940722    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 573b330f3507"
	I1011 15:06:29.952698    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:06:29.952712    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:06:29.964113    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:06:29.964125    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:06:29.981987    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:06:29.982079    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:06:29.999529    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:06:29.999534    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:06:30.004089    4700 logs.go:123] Gathering logs for coredns [4fbadd8de248] ...
	I1011 15:06:30.004095    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fbadd8de248"
	I1011 15:06:30.018309    4700 logs.go:123] Gathering logs for coredns [7f1165bcc644] ...
	I1011 15:06:30.018320    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f1165bcc644"
	I1011 15:06:30.038038    4700 logs.go:123] Gathering logs for coredns [eb84c0e2fa42] ...
	I1011 15:06:30.038049    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb84c0e2fa42"
	I1011 15:06:30.049648    4700 logs.go:123] Gathering logs for kube-scheduler [b649cd1f1ae2] ...
	I1011 15:06:30.049658    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b649cd1f1ae2"
	I1011 15:06:30.066908    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:06:30.066923    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:06:30.103714    4700 logs.go:123] Gathering logs for kube-controller-manager [6d49685ed855] ...
	I1011 15:06:30.103727    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d49685ed855"
	I1011 15:06:30.125034    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:06:30.125044    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:06:30.125070    4700 out.go:270] X Problems detected in kubelet:
	W1011 15:06:30.125074    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:06:30.125079    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:06:30.125082    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:06:30.125085    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:06:32.697443    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:06:32.697717    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:06:32.721939    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:06:32.722058    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:06:32.737461    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:06:32.737555    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:06:32.751067    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:06:32.751152    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:06:32.761621    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:06:32.761699    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:06:32.773313    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:06:32.773391    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:06:32.783985    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:06:32.784058    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:06:32.794904    5145 logs.go:282] 0 containers: []
	W1011 15:06:32.794915    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:06:32.794981    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:06:32.806021    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:06:32.806037    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:06:32.806042    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:06:32.817754    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:06:32.817763    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:06:32.832479    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:06:32.832494    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:06:32.846071    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:06:32.846082    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:06:32.857466    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:06:32.857475    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:06:32.868706    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:06:32.868716    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:06:32.905703    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:06:32.905714    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:06:32.931141    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:06:32.931154    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:06:32.945469    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:06:32.945478    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:06:32.960448    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:06:32.960458    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:06:32.978584    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:06:32.978596    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:06:32.991041    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:06:32.991051    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:06:32.995744    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:06:32.995753    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:06:33.013364    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:06:33.013375    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:06:33.030164    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:06:33.030175    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:06:33.054463    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:06:33.054472    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:06:33.066971    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:06:33.066981    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:06:35.606489    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:06:40.129118    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:06:40.608805    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:06:40.608968    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:06:40.622930    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:06:40.623018    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:06:40.635597    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:06:40.635678    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:06:40.646272    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:06:40.646356    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:06:40.658236    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:06:40.658313    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:06:40.668465    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:06:40.668540    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:06:40.679224    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:06:40.679302    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:06:40.689807    5145 logs.go:282] 0 containers: []
	W1011 15:06:40.689818    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:06:40.689881    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:06:40.700412    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:06:40.700429    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:06:40.700435    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:06:40.735092    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:06:40.735106    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:06:40.747585    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:06:40.747598    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:06:40.761042    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:06:40.761052    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:06:40.784751    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:06:40.784760    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:06:40.823800    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:06:40.823808    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:06:40.828172    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:06:40.828180    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:06:40.852571    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:06:40.852581    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:06:40.866414    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:06:40.866424    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:06:40.877616    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:06:40.877631    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:06:40.889251    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:06:40.889265    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:06:40.906489    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:06:40.906498    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:06:40.920801    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:06:40.920811    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:06:40.932440    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:06:40.932454    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:06:40.946859    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:06:40.946870    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:06:40.961122    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:06:40.961132    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:06:40.977624    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:06:40.977634    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:06:43.491146    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:06:45.131570    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:06:45.131839    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:06:45.153431    4700 logs.go:282] 1 containers: [6a1874a90592]
	I1011 15:06:45.153529    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:06:45.170454    4700 logs.go:282] 1 containers: [c84b1906f7fd]
	I1011 15:06:45.170542    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:06:45.187082    4700 logs.go:282] 4 containers: [4fbadd8de248 5396e266a7e9 7f1165bcc644 eb84c0e2fa42]
	I1011 15:06:45.187166    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:06:45.199251    4700 logs.go:282] 1 containers: [b649cd1f1ae2]
	I1011 15:06:45.199326    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:06:45.210514    4700 logs.go:282] 1 containers: [573b330f3507]
	I1011 15:06:45.210590    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:06:45.221682    4700 logs.go:282] 1 containers: [6d49685ed855]
	I1011 15:06:45.221772    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:06:45.233453    4700 logs.go:282] 0 containers: []
	W1011 15:06:45.233464    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:06:45.233528    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:06:45.243609    4700 logs.go:282] 1 containers: [bbaa751bccbf]
	I1011 15:06:45.243632    4700 logs.go:123] Gathering logs for coredns [5396e266a7e9] ...
	I1011 15:06:45.243637    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5396e266a7e9"
	I1011 15:06:45.255086    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:06:45.255095    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:06:45.279886    4700 logs.go:123] Gathering logs for coredns [7f1165bcc644] ...
	I1011 15:06:45.279892    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f1165bcc644"
	I1011 15:06:45.291961    4700 logs.go:123] Gathering logs for coredns [eb84c0e2fa42] ...
	I1011 15:06:45.291975    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb84c0e2fa42"
	I1011 15:06:45.303946    4700 logs.go:123] Gathering logs for kube-scheduler [b649cd1f1ae2] ...
	I1011 15:06:45.303959    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b649cd1f1ae2"
	I1011 15:06:45.320252    4700 logs.go:123] Gathering logs for storage-provisioner [bbaa751bccbf] ...
	I1011 15:06:45.320266    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbaa751bccbf"
	I1011 15:06:45.332261    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:06:45.332270    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:06:45.345650    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:06:45.345664    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:06:45.363921    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:06:45.364014    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:06:45.382054    4700 logs.go:123] Gathering logs for etcd [c84b1906f7fd] ...
	I1011 15:06:45.382060    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84b1906f7fd"
	I1011 15:06:45.396180    4700 logs.go:123] Gathering logs for coredns [4fbadd8de248] ...
	I1011 15:06:45.396191    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fbadd8de248"
	I1011 15:06:45.407520    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:06:45.407530    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:06:45.412208    4700 logs.go:123] Gathering logs for kube-controller-manager [6d49685ed855] ...
	I1011 15:06:45.412215    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d49685ed855"
	I1011 15:06:45.429832    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:06:45.429845    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:06:45.468374    4700 logs.go:123] Gathering logs for kube-apiserver [6a1874a90592] ...
	I1011 15:06:45.468386    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1874a90592"
	I1011 15:06:45.484702    4700 logs.go:123] Gathering logs for kube-proxy [573b330f3507] ...
	I1011 15:06:45.484713    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 573b330f3507"
	I1011 15:06:45.500791    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:06:45.500801    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:06:45.500826    4700 out.go:270] X Problems detected in kubelet:
	W1011 15:06:45.500846    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:06:45.500852    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:06:45.500856    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:06:45.500862    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:06:48.493422    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:06:48.493724    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:06:48.528922    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:06:48.529021    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:06:48.546752    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:06:48.546833    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:06:48.560517    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:06:48.560590    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:06:48.573114    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:06:48.573197    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:06:48.583329    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:06:48.583404    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:06:48.594009    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:06:48.594081    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:06:48.604491    5145 logs.go:282] 0 containers: []
	W1011 15:06:48.604501    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:06:48.604564    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:06:48.618549    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:06:48.618566    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:06:48.618570    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:06:48.656021    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:06:48.656030    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:06:48.670337    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:06:48.670347    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:06:48.687442    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:06:48.687453    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:06:48.700510    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:06:48.700520    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:06:48.714500    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:06:48.714512    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:06:48.738960    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:06:48.738971    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:06:48.755741    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:06:48.755753    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:06:48.791637    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:06:48.791648    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:06:48.808339    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:06:48.808351    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:06:48.819554    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:06:48.819568    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:06:48.842683    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:06:48.842692    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:06:48.854929    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:06:48.854941    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:06:48.859603    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:06:48.859612    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:06:48.871218    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:06:48.871228    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:06:48.883711    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:06:48.883722    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:06:48.895278    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:06:48.895290    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:06:51.409134    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:06:55.503049    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:06:56.410260    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:06:56.410424    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:06:56.429088    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:06:56.429184    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:06:56.446918    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:06:56.447007    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:06:56.457968    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:06:56.458049    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:06:56.476891    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:06:56.476968    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:06:56.487835    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:06:56.487911    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:06:56.498843    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:06:56.498920    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:06:56.510163    5145 logs.go:282] 0 containers: []
	W1011 15:06:56.510175    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:06:56.510236    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:06:56.523335    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:06:56.523355    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:06:56.523359    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:06:56.544814    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:06:56.544823    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:06:56.556045    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:06:56.556058    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:06:56.568393    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:06:56.568407    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:06:56.580226    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:06:56.580239    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:06:56.594432    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:06:56.594441    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:06:56.608454    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:06:56.608468    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:06:56.620530    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:06:56.620543    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:06:56.635722    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:06:56.635734    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:06:56.650279    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:06:56.650287    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:06:56.668132    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:06:56.668146    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:06:56.702691    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:06:56.702700    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:06:56.728812    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:06:56.728822    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:06:56.740270    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:06:56.740284    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:06:56.751688    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:06:56.751699    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:06:56.775253    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:06:56.775262    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:06:56.814498    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:06:56.814506    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:06:59.321111    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:07:00.503472    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:07:00.503753    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:07:00.535098    4700 logs.go:282] 1 containers: [6a1874a90592]
	I1011 15:07:00.535235    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:07:00.560085    4700 logs.go:282] 1 containers: [c84b1906f7fd]
	I1011 15:07:00.560176    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:07:00.572343    4700 logs.go:282] 4 containers: [4fbadd8de248 5396e266a7e9 7f1165bcc644 eb84c0e2fa42]
	I1011 15:07:00.572422    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:07:00.583326    4700 logs.go:282] 1 containers: [b649cd1f1ae2]
	I1011 15:07:00.583409    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:07:00.598042    4700 logs.go:282] 1 containers: [573b330f3507]
	I1011 15:07:00.598133    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:07:00.636123    4700 logs.go:282] 1 containers: [6d49685ed855]
	I1011 15:07:00.636202    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:07:00.647306    4700 logs.go:282] 0 containers: []
	W1011 15:07:00.647316    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:07:00.647382    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:07:00.657615    4700 logs.go:282] 1 containers: [bbaa751bccbf]
	I1011 15:07:00.657634    4700 logs.go:123] Gathering logs for storage-provisioner [bbaa751bccbf] ...
	I1011 15:07:00.657639    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbaa751bccbf"
	I1011 15:07:00.669589    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:07:00.669599    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:07:00.693914    4700 logs.go:123] Gathering logs for coredns [4fbadd8de248] ...
	I1011 15:07:00.693924    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fbadd8de248"
	I1011 15:07:00.706285    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:07:00.706297    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:07:00.741167    4700 logs.go:123] Gathering logs for kube-apiserver [6a1874a90592] ...
	I1011 15:07:00.741179    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1874a90592"
	I1011 15:07:00.755933    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:07:00.755942    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:07:00.773649    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:07:00.773743    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:07:00.790818    4700 logs.go:123] Gathering logs for etcd [c84b1906f7fd] ...
	I1011 15:07:00.790824    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84b1906f7fd"
	I1011 15:07:00.804574    4700 logs.go:123] Gathering logs for coredns [5396e266a7e9] ...
	I1011 15:07:00.804585    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5396e266a7e9"
	I1011 15:07:00.816655    4700 logs.go:123] Gathering logs for coredns [7f1165bcc644] ...
	I1011 15:07:00.816666    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f1165bcc644"
	I1011 15:07:00.828675    4700 logs.go:123] Gathering logs for coredns [eb84c0e2fa42] ...
	I1011 15:07:00.828685    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb84c0e2fa42"
	I1011 15:07:00.840194    4700 logs.go:123] Gathering logs for kube-scheduler [b649cd1f1ae2] ...
	I1011 15:07:00.840206    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b649cd1f1ae2"
	I1011 15:07:00.855201    4700 logs.go:123] Gathering logs for kube-proxy [573b330f3507] ...
	I1011 15:07:00.855211    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 573b330f3507"
	I1011 15:07:00.866787    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:07:00.866799    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:07:00.871760    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:07:00.871766    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:07:00.883671    4700 logs.go:123] Gathering logs for kube-controller-manager [6d49685ed855] ...
	I1011 15:07:00.883684    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d49685ed855"
	I1011 15:07:00.902280    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:07:00.902290    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:07:00.902315    4700 out.go:270] X Problems detected in kubelet:
	W1011 15:07:00.902320    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:07:00.902323    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:07:00.902327    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:07:00.902331    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:07:04.321604    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:07:04.321925    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:07:04.353033    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:07:04.353165    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:07:04.372043    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:07:04.372143    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:07:04.386263    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:07:04.386351    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:07:04.398402    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:07:04.398478    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:07:04.409431    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:07:04.409507    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:07:04.420918    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:07:04.421000    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:07:04.431235    5145 logs.go:282] 0 containers: []
	W1011 15:07:04.431247    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:07:04.431316    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:07:04.442318    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:07:04.442336    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:07:04.442343    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:07:04.460155    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:07:04.460166    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:07:04.472824    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:07:04.472837    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:07:04.499961    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:07:04.499975    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:07:04.514709    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:07:04.514722    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:07:04.526967    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:07:04.526977    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:07:04.549797    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:07:04.549807    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:07:04.554482    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:07:04.554488    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:07:04.591779    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:07:04.591792    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:07:04.606093    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:07:04.606103    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:07:04.617842    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:07:04.617856    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:07:04.629125    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:07:04.629141    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:07:04.642092    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:07:04.642103    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:07:04.658080    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:07:04.658090    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:07:04.670670    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:07:04.670682    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:07:04.711128    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:07:04.711144    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:07:04.725092    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:07:04.725101    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:07:07.241257    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:07:10.906328    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:07:12.242801    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:07:12.243263    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:07:12.278922    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:07:12.279076    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:07:12.304746    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:07:12.304848    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:07:12.322109    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:07:12.322191    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:07:12.332719    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:07:12.332798    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:07:12.343724    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:07:12.343803    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:07:12.357366    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:07:12.357443    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:07:12.367589    5145 logs.go:282] 0 containers: []
	W1011 15:07:12.367599    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:07:12.367665    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:07:12.378210    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:07:12.378233    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:07:12.378239    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:07:12.389978    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:07:12.389988    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:07:12.403352    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:07:12.403361    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:07:12.415054    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:07:12.415064    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:07:12.419887    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:07:12.419893    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:07:12.453693    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:07:12.453706    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:07:12.468334    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:07:12.468344    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:07:12.479916    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:07:12.479926    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:07:12.491602    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:07:12.491611    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:07:12.519913    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:07:12.519926    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:07:12.534690    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:07:12.534700    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:07:12.549151    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:07:12.549161    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:07:12.562494    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:07:12.562504    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:07:12.579887    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:07:12.579896    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:07:12.597860    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:07:12.597871    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:07:12.637580    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:07:12.637594    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:07:12.655435    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:07:12.655447    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:07:15.187958    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:07:15.908612    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:07:15.908766    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:07:15.919819    4700 logs.go:282] 1 containers: [6a1874a90592]
	I1011 15:07:15.919899    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:07:15.930632    4700 logs.go:282] 1 containers: [c84b1906f7fd]
	I1011 15:07:15.930703    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:07:15.941682    4700 logs.go:282] 4 containers: [4fbadd8de248 5396e266a7e9 7f1165bcc644 eb84c0e2fa42]
	I1011 15:07:15.941764    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:07:15.952201    4700 logs.go:282] 1 containers: [b649cd1f1ae2]
	I1011 15:07:15.952266    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:07:15.962406    4700 logs.go:282] 1 containers: [573b330f3507]
	I1011 15:07:15.962479    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:07:15.973549    4700 logs.go:282] 1 containers: [6d49685ed855]
	I1011 15:07:15.973624    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:07:15.983498    4700 logs.go:282] 0 containers: []
	W1011 15:07:15.983509    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:07:15.983574    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:07:15.994278    4700 logs.go:282] 1 containers: [bbaa751bccbf]
	I1011 15:07:15.994297    4700 logs.go:123] Gathering logs for kube-apiserver [6a1874a90592] ...
	I1011 15:07:15.994303    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1874a90592"
	I1011 15:07:16.008592    4700 logs.go:123] Gathering logs for coredns [eb84c0e2fa42] ...
	I1011 15:07:16.008605    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb84c0e2fa42"
	I1011 15:07:16.021119    4700 logs.go:123] Gathering logs for coredns [4fbadd8de248] ...
	I1011 15:07:16.021129    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fbadd8de248"
	I1011 15:07:16.033750    4700 logs.go:123] Gathering logs for coredns [5396e266a7e9] ...
	I1011 15:07:16.033762    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5396e266a7e9"
	I1011 15:07:16.045773    4700 logs.go:123] Gathering logs for kube-controller-manager [6d49685ed855] ...
	I1011 15:07:16.045785    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d49685ed855"
	I1011 15:07:16.068324    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:07:16.068337    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:07:16.080091    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:07:16.080104    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:07:16.099648    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:07:16.099742    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:07:20.190544    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:07:20.190885    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:07:20.206184    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:07:20.206286    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:07:20.219010    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:07:20.219090    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:07:20.230005    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:07:20.230082    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:07:20.240107    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:07:20.240177    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:07:20.260125    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:07:20.260195    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:07:20.270772    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:07:20.270843    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:07:20.281674    5145 logs.go:282] 0 containers: []
	W1011 15:07:20.281686    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:07:20.281753    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:07:20.292618    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:07:20.292638    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:07:20.292643    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:07:20.329478    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:07:20.329486    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:07:20.345220    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:07:20.345230    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:07:20.360775    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:07:20.360790    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:07:20.378876    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:07:20.378886    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:07:20.400652    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:07:20.400659    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:07:20.414361    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:07:20.414371    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:07:20.438659    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:07:20.438672    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:07:20.453616    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:07:20.453625    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:07:20.465642    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:07:20.465652    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:07:20.478364    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:07:20.478375    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:07:20.490363    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:07:20.490374    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:07:20.495164    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:07:20.495173    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:07:20.531478    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:07:20.531489    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:07:16.116895    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:07:16.116902    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:07:16.121313    4700 logs.go:123] Gathering logs for etcd [c84b1906f7fd] ...
	I1011 15:07:16.121323    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84b1906f7fd"
	I1011 15:07:16.135300    4700 logs.go:123] Gathering logs for storage-provisioner [bbaa751bccbf] ...
	I1011 15:07:16.135311    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbaa751bccbf"
	I1011 15:07:16.147089    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:07:16.147098    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:07:16.171702    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:07:16.171712    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:07:16.206066    4700 logs.go:123] Gathering logs for coredns [7f1165bcc644] ...
	I1011 15:07:16.206076    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f1165bcc644"
	I1011 15:07:16.218842    4700 logs.go:123] Gathering logs for kube-scheduler [b649cd1f1ae2] ...
	I1011 15:07:16.218856    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b649cd1f1ae2"
	I1011 15:07:16.234128    4700 logs.go:123] Gathering logs for kube-proxy [573b330f3507] ...
	I1011 15:07:16.234140    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 573b330f3507"
	I1011 15:07:16.245697    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:07:16.245710    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:07:16.245739    4700 out.go:270] X Problems detected in kubelet:
	W1011 15:07:16.245744    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:07:16.245748    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:07:16.245751    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:07:16.245754    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:07:20.544115    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:07:20.544124    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:07:20.558656    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:07:20.558665    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:07:20.570424    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:07:20.570435    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:07:23.084150    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:07:28.086158    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:07:28.086207    5145 kubeadm.go:597] duration metric: took 4m3.760081333s to restartPrimaryControlPlane
	W1011 15:07:28.086255    5145 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1011 15:07:28.086271    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1011 15:07:29.125727    5145 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.0394575s)
	I1011 15:07:29.125831    5145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 15:07:29.130965    5145 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 15:07:29.134071    5145 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 15:07:29.136925    5145 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 15:07:29.136930    5145 kubeadm.go:157] found existing configuration files:
	
	I1011 15:07:29.136974    5145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57470 /etc/kubernetes/admin.conf
	I1011 15:07:29.139522    5145 kubeadm.go:163] "https://control-plane.minikube.internal:57470" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:57470 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 15:07:29.139548    5145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 15:07:29.142181    5145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57470 /etc/kubernetes/kubelet.conf
	I1011 15:07:29.145539    5145 kubeadm.go:163] "https://control-plane.minikube.internal:57470" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:57470 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 15:07:29.145566    5145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 15:07:29.148240    5145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57470 /etc/kubernetes/controller-manager.conf
	I1011 15:07:29.150971    5145 kubeadm.go:163] "https://control-plane.minikube.internal:57470" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:57470 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 15:07:29.150998    5145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 15:07:29.154138    5145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57470 /etc/kubernetes/scheduler.conf
	I1011 15:07:29.156826    5145 kubeadm.go:163] "https://control-plane.minikube.internal:57470" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:57470 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 15:07:29.156851    5145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 15:07:29.159369    5145 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 15:07:29.176417    5145 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1011 15:07:29.176457    5145 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 15:07:29.234643    5145 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 15:07:29.234736    5145 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 15:07:29.234787    5145 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1011 15:07:29.283682    5145 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 15:07:29.287872    5145 out.go:235]   - Generating certificates and keys ...
	I1011 15:07:29.287904    5145 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 15:07:29.287933    5145 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 15:07:29.287973    5145 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 15:07:29.288029    5145 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 15:07:29.288065    5145 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 15:07:29.288088    5145 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 15:07:29.288120    5145 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 15:07:29.288149    5145 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 15:07:29.288199    5145 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 15:07:29.288235    5145 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 15:07:29.288254    5145 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 15:07:29.288284    5145 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 15:07:29.326627    5145 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 15:07:29.510878    5145 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 15:07:29.548553    5145 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 15:07:29.599617    5145 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 15:07:29.627090    5145 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 15:07:29.627481    5145 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 15:07:29.627547    5145 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 15:07:29.715380    5145 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 15:07:29.719561    5145 out.go:235]   - Booting up control plane ...
	I1011 15:07:29.719601    5145 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 15:07:29.719639    5145 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 15:07:29.719690    5145 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 15:07:29.719737    5145 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 15:07:29.719819    5145 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1011 15:07:26.249520    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:07:34.220149    5145 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501934 seconds
	I1011 15:07:34.220240    5145 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1011 15:07:34.224322    5145 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1011 15:07:34.733416    5145 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1011 15:07:34.733526    5145 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-583000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1011 15:07:35.237508    5145 kubeadm.go:310] [bootstrap-token] Using token: q96muf.2a0odtdr2nd5iza9
	I1011 15:07:35.242650    5145 out.go:235]   - Configuring RBAC rules ...
	I1011 15:07:35.242706    5145 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1011 15:07:35.242753    5145 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1011 15:07:35.249417    5145 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1011 15:07:35.250444    5145 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1011 15:07:35.251534    5145 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1011 15:07:35.252578    5145 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1011 15:07:35.256587    5145 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1011 15:07:35.441660    5145 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1011 15:07:35.641848    5145 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1011 15:07:35.642414    5145 kubeadm.go:310] 
	I1011 15:07:35.642450    5145 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1011 15:07:35.642455    5145 kubeadm.go:310] 
	I1011 15:07:35.642492    5145 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1011 15:07:35.642496    5145 kubeadm.go:310] 
	I1011 15:07:35.642508    5145 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1011 15:07:35.642558    5145 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1011 15:07:35.642621    5145 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1011 15:07:35.642626    5145 kubeadm.go:310] 
	I1011 15:07:35.642685    5145 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1011 15:07:35.642691    5145 kubeadm.go:310] 
	I1011 15:07:35.642723    5145 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1011 15:07:35.642727    5145 kubeadm.go:310] 
	I1011 15:07:35.642752    5145 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1011 15:07:35.642809    5145 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1011 15:07:35.642858    5145 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1011 15:07:35.642862    5145 kubeadm.go:310] 
	I1011 15:07:35.642921    5145 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1011 15:07:35.642978    5145 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1011 15:07:35.642987    5145 kubeadm.go:310] 
	I1011 15:07:35.643026    5145 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token q96muf.2a0odtdr2nd5iza9 \
	I1011 15:07:35.643144    5145 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ff7372af64c3996e800eaf522c3eb51c544993254bf1d45ae249aa6259e8117f \
	I1011 15:07:35.643156    5145 kubeadm.go:310] 	--control-plane 
	I1011 15:07:35.643158    5145 kubeadm.go:310] 
	I1011 15:07:35.643254    5145 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1011 15:07:35.643261    5145 kubeadm.go:310] 
	I1011 15:07:35.643331    5145 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token q96muf.2a0odtdr2nd5iza9 \
	I1011 15:07:35.643396    5145 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ff7372af64c3996e800eaf522c3eb51c544993254bf1d45ae249aa6259e8117f 
	I1011 15:07:35.643527    5145 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 15:07:35.643604    5145 cni.go:84] Creating CNI manager for ""
	I1011 15:07:35.643613    5145 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1011 15:07:35.647576    5145 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 15:07:35.655554    5145 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 15:07:35.658528    5145 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1011 15:07:35.663390    5145 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 15:07:35.663445    5145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 15:07:35.663483    5145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-583000 minikube.k8s.io/updated_at=2024_10_11T15_07_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=stopped-upgrade-583000 minikube.k8s.io/primary=true
	I1011 15:07:35.706893    5145 ops.go:34] apiserver oom_adj: -16
	I1011 15:07:35.706963    5145 kubeadm.go:1113] duration metric: took 43.562667ms to wait for elevateKubeSystemPrivileges
	I1011 15:07:35.706973    5145 kubeadm.go:394] duration metric: took 4m11.394537792s to StartCluster
	I1011 15:07:35.706982    5145 settings.go:142] acquiring lock: {Name:mka75dc1604295e2b491b48ad476a4c06f6cece7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 15:07:35.707080    5145 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 15:07:35.707525    5145 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/kubeconfig: {Name:mkc848521291f94f61a80272f8eb43a8779805e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 15:07:35.707749    5145 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 15:07:35.707765    5145 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1011 15:07:35.707799    5145 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-583000"
	I1011 15:07:35.707823    5145 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-583000"
	I1011 15:07:35.707829    5145 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-583000"
	I1011 15:07:35.707848    5145 config.go:182] Loaded profile config "stopped-upgrade-583000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1011 15:07:35.707858    5145 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-583000"
	W1011 15:07:35.707863    5145 addons.go:243] addon storage-provisioner should already be in state true
	I1011 15:07:35.707878    5145 host.go:66] Checking if "stopped-upgrade-583000" exists ...
	I1011 15:07:35.709012    5145 kapi.go:59] client config for stopped-upgrade-583000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/client.key", CAFile:"/Users/jenkins/minikube-integration/19749-1186/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101f7ee40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1011 15:07:35.709139    5145 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-583000"
	W1011 15:07:35.709144    5145 addons.go:243] addon default-storageclass should already be in state true
	I1011 15:07:35.709156    5145 host.go:66] Checking if "stopped-upgrade-583000" exists ...
	I1011 15:07:35.710537    5145 out.go:177] * Verifying Kubernetes components...
	I1011 15:07:35.710840    5145 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1011 15:07:35.714651    5145 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1011 15:07:35.714659    5145 sshutil.go:53] new ssh client: &{IP:localhost Port:57437 SSHKeyPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/stopped-upgrade-583000/id_rsa Username:docker}
	I1011 15:07:35.718490    5145 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 15:07:31.251724    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:07:31.251828    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:07:31.263517    4700 logs.go:282] 1 containers: [6a1874a90592]
	I1011 15:07:31.263618    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:07:31.274437    4700 logs.go:282] 1 containers: [c84b1906f7fd]
	I1011 15:07:31.274519    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:07:31.285460    4700 logs.go:282] 4 containers: [4fbadd8de248 5396e266a7e9 7f1165bcc644 eb84c0e2fa42]
	I1011 15:07:31.285543    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:07:31.296933    4700 logs.go:282] 1 containers: [b649cd1f1ae2]
	I1011 15:07:31.297016    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:07:31.308060    4700 logs.go:282] 1 containers: [573b330f3507]
	I1011 15:07:31.308139    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:07:31.321013    4700 logs.go:282] 1 containers: [6d49685ed855]
	I1011 15:07:31.321090    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:07:31.331837    4700 logs.go:282] 0 containers: []
	W1011 15:07:31.331849    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:07:31.331911    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:07:31.343743    4700 logs.go:282] 1 containers: [bbaa751bccbf]
	I1011 15:07:31.343760    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:07:31.343765    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:07:31.348331    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:07:31.348338    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:07:31.368504    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:07:31.368598    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:07:31.386629    4700 logs.go:123] Gathering logs for kube-controller-manager [6d49685ed855] ...
	I1011 15:07:31.386637    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d49685ed855"
	I1011 15:07:31.404725    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:07:31.404741    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:07:31.429953    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:07:31.429973    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:07:31.442726    4700 logs.go:123] Gathering logs for coredns [5396e266a7e9] ...
	I1011 15:07:31.442739    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5396e266a7e9"
	I1011 15:07:31.456596    4700 logs.go:123] Gathering logs for coredns [4fbadd8de248] ...
	I1011 15:07:31.456608    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fbadd8de248"
	I1011 15:07:31.470018    4700 logs.go:123] Gathering logs for kube-scheduler [b649cd1f1ae2] ...
	I1011 15:07:31.470030    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b649cd1f1ae2"
	I1011 15:07:31.486134    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:07:31.486148    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:07:31.523176    4700 logs.go:123] Gathering logs for etcd [c84b1906f7fd] ...
	I1011 15:07:31.523188    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84b1906f7fd"
	I1011 15:07:31.537927    4700 logs.go:123] Gathering logs for coredns [7f1165bcc644] ...
	I1011 15:07:31.537941    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f1165bcc644"
	I1011 15:07:31.552373    4700 logs.go:123] Gathering logs for coredns [eb84c0e2fa42] ...
	I1011 15:07:31.552385    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb84c0e2fa42"
	I1011 15:07:31.564888    4700 logs.go:123] Gathering logs for kube-proxy [573b330f3507] ...
	I1011 15:07:31.564900    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 573b330f3507"
	I1011 15:07:31.578096    4700 logs.go:123] Gathering logs for storage-provisioner [bbaa751bccbf] ...
	I1011 15:07:31.578107    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbaa751bccbf"
	I1011 15:07:31.591320    4700 logs.go:123] Gathering logs for kube-apiserver [6a1874a90592] ...
	I1011 15:07:31.591331    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1874a90592"
	I1011 15:07:31.606723    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:07:31.606734    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:07:31.606762    4700 out.go:270] X Problems detected in kubelet:
	W1011 15:07:31.606769    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:07:31.606772    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:07:31.606776    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:07:31.606779    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:07:35.721614    5145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 15:07:35.724609    5145 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 15:07:35.724615    5145 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1011 15:07:35.724621    5145 sshutil.go:53] new ssh client: &{IP:localhost Port:57437 SSHKeyPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/stopped-upgrade-583000/id_rsa Username:docker}
	I1011 15:07:35.805557    5145 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 15:07:35.811148    5145 api_server.go:52] waiting for apiserver process to appear ...
	I1011 15:07:35.811212    5145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 15:07:35.815265    5145 api_server.go:72] duration metric: took 107.503709ms to wait for apiserver process to appear ...
	I1011 15:07:35.815274    5145 api_server.go:88] waiting for apiserver healthz status ...
	I1011 15:07:35.815281    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:07:35.861837    5145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1011 15:07:35.904350    5145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 15:07:36.209287    5145 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1011 15:07:36.209298    5145 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1011 15:07:40.817273    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:07:40.817294    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:07:41.610744    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:07:45.817649    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:07:45.817684    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:07:46.612920    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:07:46.613025    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:07:46.624023    4700 logs.go:282] 1 containers: [6a1874a90592]
	I1011 15:07:46.624102    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:07:46.634315    4700 logs.go:282] 1 containers: [c84b1906f7fd]
	I1011 15:07:46.634386    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:07:46.645303    4700 logs.go:282] 4 containers: [4fbadd8de248 5396e266a7e9 7f1165bcc644 eb84c0e2fa42]
	I1011 15:07:46.645393    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:07:46.656046    4700 logs.go:282] 1 containers: [b649cd1f1ae2]
	I1011 15:07:46.656127    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:07:46.672268    4700 logs.go:282] 1 containers: [573b330f3507]
	I1011 15:07:46.672341    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:07:46.682630    4700 logs.go:282] 1 containers: [6d49685ed855]
	I1011 15:07:46.682704    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:07:46.695381    4700 logs.go:282] 0 containers: []
	W1011 15:07:46.695395    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:07:46.695472    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:07:46.705762    4700 logs.go:282] 1 containers: [bbaa751bccbf]
	I1011 15:07:46.705780    4700 logs.go:123] Gathering logs for etcd [c84b1906f7fd] ...
	I1011 15:07:46.705784    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84b1906f7fd"
	I1011 15:07:46.720342    4700 logs.go:123] Gathering logs for kube-scheduler [b649cd1f1ae2] ...
	I1011 15:07:46.720353    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b649cd1f1ae2"
	I1011 15:07:46.736091    4700 logs.go:123] Gathering logs for storage-provisioner [bbaa751bccbf] ...
	I1011 15:07:46.736101    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbaa751bccbf"
	I1011 15:07:46.748473    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:07:46.748482    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:07:46.787578    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:07:46.787589    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:07:46.799901    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:07:46.799915    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:07:46.824887    4700 logs.go:123] Gathering logs for kube-apiserver [6a1874a90592] ...
	I1011 15:07:46.824895    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1874a90592"
	I1011 15:07:46.838749    4700 logs.go:123] Gathering logs for coredns [4fbadd8de248] ...
	I1011 15:07:46.838760    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fbadd8de248"
	I1011 15:07:46.850720    4700 logs.go:123] Gathering logs for coredns [5396e266a7e9] ...
	I1011 15:07:46.850731    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5396e266a7e9"
	I1011 15:07:46.863988    4700 logs.go:123] Gathering logs for kube-proxy [573b330f3507] ...
	I1011 15:07:46.864002    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 573b330f3507"
	I1011 15:07:46.878314    4700 logs.go:123] Gathering logs for kube-controller-manager [6d49685ed855] ...
	I1011 15:07:46.878326    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d49685ed855"
	I1011 15:07:46.895734    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:07:46.895746    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:07:46.914427    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:07:46.914522    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:07:46.932115    4700 logs.go:123] Gathering logs for coredns [7f1165bcc644] ...
	I1011 15:07:46.932122    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f1165bcc644"
	I1011 15:07:46.944216    4700 logs.go:123] Gathering logs for coredns [eb84c0e2fa42] ...
	I1011 15:07:46.944227    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb84c0e2fa42"
	I1011 15:07:46.956235    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:07:46.956246    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:07:46.961205    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:07:46.961216    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:07:46.961241    4700 out.go:270] X Problems detected in kubelet:
	W1011 15:07:46.961245    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:07:46.961249    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:07:46.961252    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:07:46.961255    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:07:50.817996    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:07:50.818020    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:07:55.818422    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:07:55.818471    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:07:56.963601    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:08:00.819135    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:08:00.819161    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:08:01.965746    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:08:01.965849    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:08:01.992473    4700 logs.go:282] 1 containers: [6a1874a90592]
	I1011 15:08:01.992567    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:08:02.014494    4700 logs.go:282] 1 containers: [c84b1906f7fd]
	I1011 15:08:02.014576    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:08:02.025598    4700 logs.go:282] 4 containers: [4fbadd8de248 5396e266a7e9 7f1165bcc644 eb84c0e2fa42]
	I1011 15:08:02.025685    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:08:02.036284    4700 logs.go:282] 1 containers: [b649cd1f1ae2]
	I1011 15:08:02.036361    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:08:02.046501    4700 logs.go:282] 1 containers: [573b330f3507]
	I1011 15:08:02.046587    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:08:02.057773    4700 logs.go:282] 1 containers: [6d49685ed855]
	I1011 15:08:02.057843    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:08:02.068418    4700 logs.go:282] 0 containers: []
	W1011 15:08:02.068428    4700 logs.go:284] No container was found matching "kindnet"
	I1011 15:08:02.068490    4700 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:08:02.079380    4700 logs.go:282] 1 containers: [bbaa751bccbf]
	I1011 15:08:02.079397    4700 logs.go:123] Gathering logs for kubelet ...
	I1011 15:08:02.079404    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 15:08:02.098555    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:08:02.098649    4700 logs.go:138] Found kubelet problem: Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:08:02.115756    4700 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:08:02.115761    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:08:02.150925    4700 logs.go:123] Gathering logs for coredns [4fbadd8de248] ...
	I1011 15:08:02.150936    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fbadd8de248"
	I1011 15:08:02.162643    4700 logs.go:123] Gathering logs for coredns [7f1165bcc644] ...
	I1011 15:08:02.162655    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f1165bcc644"
	I1011 15:08:02.174386    4700 logs.go:123] Gathering logs for kube-proxy [573b330f3507] ...
	I1011 15:08:02.174397    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 573b330f3507"
	I1011 15:08:02.187120    4700 logs.go:123] Gathering logs for container status ...
	I1011 15:08:02.187131    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:08:02.199055    4700 logs.go:123] Gathering logs for etcd [c84b1906f7fd] ...
	I1011 15:08:02.199069    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c84b1906f7fd"
	I1011 15:08:02.213329    4700 logs.go:123] Gathering logs for coredns [eb84c0e2fa42] ...
	I1011 15:08:02.213342    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb84c0e2fa42"
	I1011 15:08:02.225068    4700 logs.go:123] Gathering logs for coredns [5396e266a7e9] ...
	I1011 15:08:02.225078    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5396e266a7e9"
	I1011 15:08:02.243921    4700 logs.go:123] Gathering logs for kube-controller-manager [6d49685ed855] ...
	I1011 15:08:02.243931    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d49685ed855"
	I1011 15:08:02.261622    4700 logs.go:123] Gathering logs for storage-provisioner [bbaa751bccbf] ...
	I1011 15:08:02.261633    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbaa751bccbf"
	I1011 15:08:02.273478    4700 logs.go:123] Gathering logs for dmesg ...
	I1011 15:08:02.273489    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:08:02.278619    4700 logs.go:123] Gathering logs for kube-apiserver [6a1874a90592] ...
	I1011 15:08:02.278625    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1874a90592"
	I1011 15:08:02.292844    4700 logs.go:123] Gathering logs for kube-scheduler [b649cd1f1ae2] ...
	I1011 15:08:02.292853    4700 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b649cd1f1ae2"
	I1011 15:08:02.308001    4700 logs.go:123] Gathering logs for Docker ...
	I1011 15:08:02.308011    4700 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:08:02.332470    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:08:02.332479    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 15:08:02.332505    4700 out.go:270] X Problems detected in kubelet:
	W1011 15:08:02.332510    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: W1011 22:00:15.144984    4162 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	W1011 15:08:02.332513    4700 out.go:270]   Oct 11 22:00:15 running-upgrade-130000 kubelet[4162]: E1011 22:00:15.145002    4162 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-130000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-130000' and this object
	I1011 15:08:02.332516    4700 out.go:358] Setting ErrFile to fd 2...
	I1011 15:08:02.332519    4700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:08:05.819882    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:08:05.819911    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1011 15:08:06.211180    5145 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1011 15:08:06.215414    5145 out.go:177] * Enabled addons: storage-provisioner
	I1011 15:08:06.223399    5145 addons.go:510] duration metric: took 30.51612475s for enable addons: enabled=[storage-provisioner]
	I1011 15:08:10.820887    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:08:10.820926    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:08:12.336499    4700 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:08:17.338714    4700 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:08:17.342501    4700 out.go:201] 
	W1011 15:08:17.347341    4700 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1011 15:08:17.347358    4700 out.go:270] * 
	W1011 15:08:17.348332    4700 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 15:08:17.358328    4700 out.go:201] 
	I1011 15:08:15.822211    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:08:15.822254    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:08:20.824017    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:08:20.824100    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:08:25.826460    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:08:25.826500    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Fri 2024-10-11 21:59:19 UTC, ends at Fri 2024-10-11 22:08:33 UTC. --
	Oct 11 22:08:14 running-upgrade-130000 cri-dockerd[2748]: time="2024-10-11T22:08:14Z" level=error msg="ContainerStats resp: {0x40005ed580 linux}"
	Oct 11 22:08:14 running-upgrade-130000 cri-dockerd[2748]: time="2024-10-11T22:08:14Z" level=error msg="ContainerStats resp: {<nil> }"
	Oct 11 22:08:14 running-upgrade-130000 cri-dockerd[2748]: time="2024-10-11T22:08:14Z" level=error msg="Error response from daemon: No such container: 7f1165bcc644a7b67a6c1f6bce07ca519b9dab89e6658749726abc5996f0bbf3 Failed to get stats from container 7f1165bcc644a7b67a6c1f6bce07ca519b9dab89e6658749726abc5996f0bbf3"
	Oct 11 22:08:15 running-upgrade-130000 cri-dockerd[2748]: time="2024-10-11T22:08:15Z" level=error msg="ContainerStats resp: {0x4000a9ce80 linux}"
	Oct 11 22:08:16 running-upgrade-130000 cri-dockerd[2748]: time="2024-10-11T22:08:16Z" level=error msg="ContainerStats resp: {0x400060ec40 linux}"
	Oct 11 22:08:16 running-upgrade-130000 cri-dockerd[2748]: time="2024-10-11T22:08:16Z" level=error msg="ContainerStats resp: {0x400060f080 linux}"
	Oct 11 22:08:16 running-upgrade-130000 cri-dockerd[2748]: time="2024-10-11T22:08:16Z" level=error msg="ContainerStats resp: {0x400060f740 linux}"
	Oct 11 22:08:16 running-upgrade-130000 cri-dockerd[2748]: time="2024-10-11T22:08:16Z" level=error msg="ContainerStats resp: {0x400093e540 linux}"
	Oct 11 22:08:16 running-upgrade-130000 cri-dockerd[2748]: time="2024-10-11T22:08:16Z" level=error msg="ContainerStats resp: {0x40000b94c0 linux}"
	Oct 11 22:08:16 running-upgrade-130000 cri-dockerd[2748]: time="2024-10-11T22:08:16Z" level=error msg="ContainerStats resp: {0x400093efc0 linux}"
	Oct 11 22:08:16 running-upgrade-130000 cri-dockerd[2748]: time="2024-10-11T22:08:16Z" level=error msg="ContainerStats resp: {0x400093f3c0 linux}"
	Oct 11 22:08:16 running-upgrade-130000 cri-dockerd[2748]: time="2024-10-11T22:08:16Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 11 22:08:21 running-upgrade-130000 cri-dockerd[2748]: time="2024-10-11T22:08:21Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 11 22:08:26 running-upgrade-130000 cri-dockerd[2748]: time="2024-10-11T22:08:26Z" level=error msg="ContainerStats resp: {0x40008510c0 linux}"
	Oct 11 22:08:26 running-upgrade-130000 cri-dockerd[2748]: time="2024-10-11T22:08:26Z" level=error msg="ContainerStats resp: {0x400091bc80 linux}"
	Oct 11 22:08:26 running-upgrade-130000 cri-dockerd[2748]: time="2024-10-11T22:08:26Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 11 22:08:27 running-upgrade-130000 cri-dockerd[2748]: time="2024-10-11T22:08:27Z" level=error msg="ContainerStats resp: {0x400060f380 linux}"
	Oct 11 22:08:28 running-upgrade-130000 cri-dockerd[2748]: time="2024-10-11T22:08:28Z" level=error msg="ContainerStats resp: {0x400007e500 linux}"
	Oct 11 22:08:28 running-upgrade-130000 cri-dockerd[2748]: time="2024-10-11T22:08:28Z" level=error msg="ContainerStats resp: {0x400007eb40 linux}"
	Oct 11 22:08:28 running-upgrade-130000 cri-dockerd[2748]: time="2024-10-11T22:08:28Z" level=error msg="ContainerStats resp: {0x4000358dc0 linux}"
	Oct 11 22:08:28 running-upgrade-130000 cri-dockerd[2748]: time="2024-10-11T22:08:28Z" level=error msg="ContainerStats resp: {0x4000359200 linux}"
	Oct 11 22:08:28 running-upgrade-130000 cri-dockerd[2748]: time="2024-10-11T22:08:28Z" level=error msg="ContainerStats resp: {0x4000358440 linux}"
	Oct 11 22:08:28 running-upgrade-130000 cri-dockerd[2748]: time="2024-10-11T22:08:28Z" level=error msg="ContainerStats resp: {0x400007e680 linux}"
	Oct 11 22:08:28 running-upgrade-130000 cri-dockerd[2748]: time="2024-10-11T22:08:28Z" level=error msg="ContainerStats resp: {0x4000358d00 linux}"
	Oct 11 22:08:31 running-upgrade-130000 cri-dockerd[2748]: time="2024-10-11T22:08:31Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	674dd1d475189       edaa71f2aee88       19 seconds ago      Running             coredns                   2                   98c85f152eb37
	188fef60adb81       edaa71f2aee88       20 seconds ago      Running             coredns                   2                   bb2ce80f4309d
	4fbadd8de2488       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   bb2ce80f4309d
	5396e266a7e9d       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   98c85f152eb37
	bbaa751bccbf8       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   b86e0c2348ab3
	573b330f35075       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   ad10bacbbea68
	b649cd1f1ae2d       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   29980686c2b2b
	c84b1906f7fd9       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   81ccadbf0af0a
	6d49685ed8551       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   534251c0bb7f1
	6a1874a905927       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   484548e5b61ce
	
	
	==> coredns [188fef60adb8] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2868632212595680746.7939744859433265324. HINFO: read udp 10.244.0.3:45468->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2868632212595680746.7939744859433265324. HINFO: read udp 10.244.0.3:39543->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2868632212595680746.7939744859433265324. HINFO: read udp 10.244.0.3:41228->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2868632212595680746.7939744859433265324. HINFO: read udp 10.244.0.3:45160->10.0.2.3:53: i/o timeout
	
	
	==> coredns [4fbadd8de248] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 246915404630497503.3139242757794673480. HINFO: read udp 10.244.0.3:44110->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 246915404630497503.3139242757794673480. HINFO: read udp 10.244.0.3:46720->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 246915404630497503.3139242757794673480. HINFO: read udp 10.244.0.3:52343->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 246915404630497503.3139242757794673480. HINFO: read udp 10.244.0.3:35855->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 246915404630497503.3139242757794673480. HINFO: read udp 10.244.0.3:46865->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 246915404630497503.3139242757794673480. HINFO: read udp 10.244.0.3:49989->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 246915404630497503.3139242757794673480. HINFO: read udp 10.244.0.3:41256->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 246915404630497503.3139242757794673480. HINFO: read udp 10.244.0.3:60720->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 246915404630497503.3139242757794673480. HINFO: read udp 10.244.0.3:57027->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 246915404630497503.3139242757794673480. HINFO: read udp 10.244.0.3:49128->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [5396e266a7e9] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7108170601108996226.8354182389976201901. HINFO: read udp 10.244.0.2:49031->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7108170601108996226.8354182389976201901. HINFO: read udp 10.244.0.2:49441->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7108170601108996226.8354182389976201901. HINFO: read udp 10.244.0.2:37740->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7108170601108996226.8354182389976201901. HINFO: read udp 10.244.0.2:35194->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7108170601108996226.8354182389976201901. HINFO: read udp 10.244.0.2:47190->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7108170601108996226.8354182389976201901. HINFO: read udp 10.244.0.2:58217->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7108170601108996226.8354182389976201901. HINFO: read udp 10.244.0.2:46297->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7108170601108996226.8354182389976201901. HINFO: read udp 10.244.0.2:50821->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7108170601108996226.8354182389976201901. HINFO: read udp 10.244.0.2:36606->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7108170601108996226.8354182389976201901. HINFO: read udp 10.244.0.2:56308->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [674dd1d47518] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2350192147011441378.3942556271095071917. HINFO: read udp 10.244.0.2:46685->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2350192147011441378.3942556271095071917. HINFO: read udp 10.244.0.2:49282->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2350192147011441378.3942556271095071917. HINFO: read udp 10.244.0.2:43323->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2350192147011441378.3942556271095071917. HINFO: read udp 10.244.0.2:54910->10.0.2.3:53: i/o timeout
	
	
	==> describe nodes <==
	Name:               running-upgrade-130000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-130000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd
	                    minikube.k8s.io/name=running-upgrade-130000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_11T15_04_12_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Oct 2024 22:04:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-130000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Oct 2024 22:08:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 11 Oct 2024 22:04:12 +0000   Fri, 11 Oct 2024 22:04:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 11 Oct 2024 22:04:12 +0000   Fri, 11 Oct 2024 22:04:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 11 Oct 2024 22:04:12 +0000   Fri, 11 Oct 2024 22:04:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 11 Oct 2024 22:04:12 +0000   Fri, 11 Oct 2024 22:04:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-130000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 a9a3ea9079a74ee5acd2aa8339c5bfaf
	  System UUID:                a9a3ea9079a74ee5acd2aa8339c5bfaf
	  Boot ID:                    c5af1b6a-ca6d-4cb7-91ab-4153649dbb72
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-pxz4n                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m8s
	  kube-system                 coredns-6d4b75cb6d-z6msc                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m8s
	  kube-system                 etcd-running-upgrade-130000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m20s
	  kube-system                 kube-apiserver-running-upgrade-130000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-controller-manager-running-upgrade-130000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-proxy-zlqf6                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-running-upgrade-130000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m6s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m26s (x5 over 4m26s)  kubelet          Node running-upgrade-130000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m26s (x5 over 4m26s)  kubelet          Node running-upgrade-130000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m26s (x4 over 4m26s)  kubelet          Node running-upgrade-130000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeAllocatableEnforced  4m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m21s                  kubelet          Node running-upgrade-130000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s                  kubelet          Node running-upgrade-130000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s                  kubelet          Node running-upgrade-130000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m21s                  kubelet          Node running-upgrade-130000 status is now: NodeReady
	  Normal  Starting                 4m21s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m9s                   node-controller  Node running-upgrade-130000 event: Registered Node running-upgrade-130000 in Controller
	
	
	==> dmesg <==
	[  +1.748898] systemd-fstab-generator[886]: Ignoring "noauto" for root device
	[  +0.080814] systemd-fstab-generator[897]: Ignoring "noauto" for root device
	[  +0.080157] systemd-fstab-generator[908]: Ignoring "noauto" for root device
	[  +1.137681] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.087969] systemd-fstab-generator[1058]: Ignoring "noauto" for root device
	[  +0.087564] systemd-fstab-generator[1069]: Ignoring "noauto" for root device
	[  +2.010177] systemd-fstab-generator[1299]: Ignoring "noauto" for root device
	[  +8.641087] systemd-fstab-generator[1960]: Ignoring "noauto" for root device
	[  +2.954924] systemd-fstab-generator[2231]: Ignoring "noauto" for root device
	[  +0.148262] systemd-fstab-generator[2266]: Ignoring "noauto" for root device
	[  +0.086148] systemd-fstab-generator[2277]: Ignoring "noauto" for root device
	[  +0.098135] systemd-fstab-generator[2290]: Ignoring "noauto" for root device
	[  +2.145957] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.135607] systemd-fstab-generator[2705]: Ignoring "noauto" for root device
	[  +0.087236] systemd-fstab-generator[2716]: Ignoring "noauto" for root device
	[  +0.077568] systemd-fstab-generator[2727]: Ignoring "noauto" for root device
	[  +0.090998] systemd-fstab-generator[2741]: Ignoring "noauto" for root device
	[  +2.334141] systemd-fstab-generator[2894]: Ignoring "noauto" for root device
	[  +3.670363] systemd-fstab-generator[3455]: Ignoring "noauto" for root device
	[  +2.613741] systemd-fstab-generator[4156]: Ignoring "noauto" for root device
	[Oct11 22:00] kauditd_printk_skb: 68 callbacks suppressed
	[Oct11 22:04] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.344000] systemd-fstab-generator[10026]: Ignoring "noauto" for root device
	[  +5.635487] systemd-fstab-generator[10618]: Ignoring "noauto" for root device
	[  +0.470449] systemd-fstab-generator[10748]: Ignoring "noauto" for root device
	
	
	==> etcd [c84b1906f7fd] <==
	{"level":"info","ts":"2024-10-11T22:04:08.264Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-11T22:04:08.264Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-11T22:04:08.264Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-10-11T22:04:08.264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-10-11T22:04:08.264Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-10-11T22:04:08.264Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-10-11T22:04:08.264Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-10-11T22:04:08.431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-11T22:04:08.431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-11T22:04:08.431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-10-11T22:04:08.431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-10-11T22:04:08.431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-10-11T22:04:08.431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-10-11T22:04:08.431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-10-11T22:04:08.431Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-130000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-11T22:04:08.431Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-11T22:04:08.431Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T22:04:08.431Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-11T22:04:08.431Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-11T22:04:08.432Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-11T22:04:08.432Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T22:04:08.432Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T22:04:08.439Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T22:04:08.439Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-11T22:04:08.439Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	
	
	==> kernel <==
	 22:08:33 up 9 min,  0 users,  load average: 0.66, 0.47, 0.25
	Linux running-upgrade-130000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [6a1874a90592] <==
	I1011 22:04:09.970853       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1011 22:04:09.977353       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1011 22:04:10.000001       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1011 22:04:10.000036       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1011 22:04:10.000042       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1011 22:04:10.000045       1 cache.go:39] Caches are synced for autoregister controller
	I1011 22:04:10.015376       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1011 22:04:10.710406       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1011 22:04:10.884099       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1011 22:04:10.888631       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1011 22:04:10.888739       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1011 22:04:11.037092       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1011 22:04:11.048309       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1011 22:04:11.139287       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W1011 22:04:11.141444       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I1011 22:04:11.141887       1 controller.go:611] quota admission added evaluator for: endpoints
	I1011 22:04:11.143331       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1011 22:04:12.013738       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1011 22:04:12.672639       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1011 22:04:12.676211       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I1011 22:04:12.680823       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1011 22:04:12.729588       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1011 22:04:25.318653       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I1011 22:04:25.568154       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I1011 22:04:26.657985       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [6d49685ed855] <==
	I1011 22:04:24.845706       1 event.go:294] "Event occurred" object="running-upgrade-130000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-130000 event: Registered Node running-upgrade-130000 in Controller"
	I1011 22:04:24.847443       1 shared_informer.go:262] Caches are synced for TTL after finished
	I1011 22:04:24.847455       1 shared_informer.go:262] Caches are synced for HPA
	I1011 22:04:24.851838       1 shared_informer.go:262] Caches are synced for PV protection
	I1011 22:04:24.864342       1 shared_informer.go:262] Caches are synced for ReplicationController
	I1011 22:04:24.864361       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I1011 22:04:24.864394       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I1011 22:04:24.865025       1 shared_informer.go:262] Caches are synced for expand
	I1011 22:04:24.866741       1 shared_informer.go:262] Caches are synced for PVC protection
	I1011 22:04:24.866754       1 shared_informer.go:262] Caches are synced for stateful set
	I1011 22:04:24.866763       1 shared_informer.go:262] Caches are synced for service account
	I1011 22:04:24.867982       1 shared_informer.go:262] Caches are synced for attach detach
	I1011 22:04:24.950359       1 shared_informer.go:262] Caches are synced for disruption
	I1011 22:04:24.950368       1 disruption.go:371] Sending events to api server.
	I1011 22:04:25.041537       1 shared_informer.go:262] Caches are synced for resource quota
	I1011 22:04:25.064395       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I1011 22:04:25.067673       1 shared_informer.go:262] Caches are synced for crt configmap
	I1011 22:04:25.072811       1 shared_informer.go:262] Caches are synced for resource quota
	I1011 22:04:25.321784       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-zlqf6"
	I1011 22:04:25.482248       1 shared_informer.go:262] Caches are synced for garbage collector
	I1011 22:04:25.513581       1 shared_informer.go:262] Caches are synced for garbage collector
	I1011 22:04:25.513590       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1011 22:04:25.569076       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I1011 22:04:25.869562       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-z6msc"
	I1011 22:04:25.874301       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-pxz4n"
	
	
	==> kube-proxy [573b330f3507] <==
	I1011 22:04:26.598912       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I1011 22:04:26.598945       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I1011 22:04:26.598956       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1011 22:04:26.651588       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1011 22:04:26.651597       1 server_others.go:206] "Using iptables Proxier"
	I1011 22:04:26.651617       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1011 22:04:26.651728       1 server.go:661] "Version info" version="v1.24.1"
	I1011 22:04:26.651732       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1011 22:04:26.652133       1 config.go:317] "Starting service config controller"
	I1011 22:04:26.652154       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1011 22:04:26.652166       1 config.go:226] "Starting endpoint slice config controller"
	I1011 22:04:26.652168       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1011 22:04:26.655241       1 config.go:444] "Starting node config controller"
	I1011 22:04:26.655250       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1011 22:04:26.752551       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1011 22:04:26.752574       1 shared_informer.go:262] Caches are synced for service config
	I1011 22:04:26.755746       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [b649cd1f1ae2] <==
	W1011 22:04:09.938428       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1011 22:04:09.938435       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1011 22:04:09.938447       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1011 22:04:09.938450       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1011 22:04:09.938471       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1011 22:04:09.938478       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1011 22:04:09.938430       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1011 22:04:09.938497       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1011 22:04:09.938532       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1011 22:04:09.938535       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1011 22:04:10.765323       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1011 22:04:10.765728       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1011 22:04:10.806470       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1011 22:04:10.806663       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1011 22:04:10.837415       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1011 22:04:10.837732       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1011 22:04:10.901363       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1011 22:04:10.901565       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1011 22:04:10.917256       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1011 22:04:10.917439       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1011 22:04:10.956778       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1011 22:04:10.956877       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1011 22:04:10.990389       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1011 22:04:10.990483       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1011 22:04:13.536470       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Fri 2024-10-11 21:59:19 UTC, ends at Fri 2024-10-11 22:08:33 UTC. --
	Oct 11 22:04:24 running-upgrade-130000 kubelet[10624]: I1011 22:04:24.852214   10624 topology_manager.go:200] "Topology Admit Handler"
	Oct 11 22:04:25 running-upgrade-130000 kubelet[10624]: I1011 22:04:25.030100   10624 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b5f81602-9116-491d-99c8-750e34833f77-tmp\") pod \"storage-provisioner\" (UID: \"b5f81602-9116-491d-99c8-750e34833f77\") " pod="kube-system/storage-provisioner"
	Oct 11 22:04:25 running-upgrade-130000 kubelet[10624]: I1011 22:04:25.030137   10624 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prggj\" (UniqueName: \"kubernetes.io/projected/b5f81602-9116-491d-99c8-750e34833f77-kube-api-access-prggj\") pod \"storage-provisioner\" (UID: \"b5f81602-9116-491d-99c8-750e34833f77\") " pod="kube-system/storage-provisioner"
	Oct 11 22:04:25 running-upgrade-130000 kubelet[10624]: E1011 22:04:25.135094   10624 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 11 22:04:25 running-upgrade-130000 kubelet[10624]: E1011 22:04:25.135110   10624 projected.go:192] Error preparing data for projected volume kube-api-access-prggj for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Oct 11 22:04:25 running-upgrade-130000 kubelet[10624]: E1011 22:04:25.135140   10624 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/b5f81602-9116-491d-99c8-750e34833f77-kube-api-access-prggj podName:b5f81602-9116-491d-99c8-750e34833f77 nodeName:}" failed. No retries permitted until 2024-10-11 22:04:25.635128818 +0000 UTC m=+12.971340725 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-prggj" (UniqueName: "kubernetes.io/projected/b5f81602-9116-491d-99c8-750e34833f77-kube-api-access-prggj") pod "storage-provisioner" (UID: "b5f81602-9116-491d-99c8-750e34833f77") : configmap "kube-root-ca.crt" not found
	Oct 11 22:04:25 running-upgrade-130000 kubelet[10624]: I1011 22:04:25.324551   10624 topology_manager.go:200] "Topology Admit Handler"
	Oct 11 22:04:25 running-upgrade-130000 kubelet[10624]: I1011 22:04:25.433437   10624 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f84078f5-56fa-4e83-82bc-dd9124714122-kube-proxy\") pod \"kube-proxy-zlqf6\" (UID: \"f84078f5-56fa-4e83-82bc-dd9124714122\") " pod="kube-system/kube-proxy-zlqf6"
	Oct 11 22:04:25 running-upgrade-130000 kubelet[10624]: I1011 22:04:25.433568   10624 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f84078f5-56fa-4e83-82bc-dd9124714122-lib-modules\") pod \"kube-proxy-zlqf6\" (UID: \"f84078f5-56fa-4e83-82bc-dd9124714122\") " pod="kube-system/kube-proxy-zlqf6"
	Oct 11 22:04:25 running-upgrade-130000 kubelet[10624]: I1011 22:04:25.433597   10624 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f84078f5-56fa-4e83-82bc-dd9124714122-xtables-lock\") pod \"kube-proxy-zlqf6\" (UID: \"f84078f5-56fa-4e83-82bc-dd9124714122\") " pod="kube-system/kube-proxy-zlqf6"
	Oct 11 22:04:25 running-upgrade-130000 kubelet[10624]: I1011 22:04:25.433611   10624 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdg4z\" (UniqueName: \"kubernetes.io/projected/f84078f5-56fa-4e83-82bc-dd9124714122-kube-api-access-xdg4z\") pod \"kube-proxy-zlqf6\" (UID: \"f84078f5-56fa-4e83-82bc-dd9124714122\") " pod="kube-system/kube-proxy-zlqf6"
	Oct 11 22:04:25 running-upgrade-130000 kubelet[10624]: E1011 22:04:25.536581   10624 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 11 22:04:25 running-upgrade-130000 kubelet[10624]: E1011 22:04:25.536599   10624 projected.go:192] Error preparing data for projected volume kube-api-access-xdg4z for pod kube-system/kube-proxy-zlqf6: configmap "kube-root-ca.crt" not found
	Oct 11 22:04:25 running-upgrade-130000 kubelet[10624]: E1011 22:04:25.536633   10624 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/f84078f5-56fa-4e83-82bc-dd9124714122-kube-api-access-xdg4z podName:f84078f5-56fa-4e83-82bc-dd9124714122 nodeName:}" failed. No retries permitted until 2024-10-11 22:04:26.036624623 +0000 UTC m=+13.372836531 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xdg4z" (UniqueName: "kubernetes.io/projected/f84078f5-56fa-4e83-82bc-dd9124714122-kube-api-access-xdg4z") pod "kube-proxy-zlqf6" (UID: "f84078f5-56fa-4e83-82bc-dd9124714122") : configmap "kube-root-ca.crt" not found
	Oct 11 22:04:25 running-upgrade-130000 kubelet[10624]: E1011 22:04:25.636024   10624 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 11 22:04:25 running-upgrade-130000 kubelet[10624]: E1011 22:04:25.636048   10624 projected.go:192] Error preparing data for projected volume kube-api-access-prggj for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Oct 11 22:04:25 running-upgrade-130000 kubelet[10624]: E1011 22:04:25.636079   10624 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/b5f81602-9116-491d-99c8-750e34833f77-kube-api-access-prggj podName:b5f81602-9116-491d-99c8-750e34833f77 nodeName:}" failed. No retries permitted until 2024-10-11 22:04:26.636069921 +0000 UTC m=+13.972281829 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-prggj" (UniqueName: "kubernetes.io/projected/b5f81602-9116-491d-99c8-750e34833f77-kube-api-access-prggj") pod "storage-provisioner" (UID: "b5f81602-9116-491d-99c8-750e34833f77") : configmap "kube-root-ca.crt" not found
	Oct 11 22:04:25 running-upgrade-130000 kubelet[10624]: I1011 22:04:25.872699   10624 topology_manager.go:200] "Topology Admit Handler"
	Oct 11 22:04:25 running-upgrade-130000 kubelet[10624]: I1011 22:04:25.880321   10624 topology_manager.go:200] "Topology Admit Handler"
	Oct 11 22:04:26 running-upgrade-130000 kubelet[10624]: I1011 22:04:26.038356   10624 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38140a20-5848-43af-aee6-03c489a7a544-config-volume\") pod \"coredns-6d4b75cb6d-pxz4n\" (UID: \"38140a20-5848-43af-aee6-03c489a7a544\") " pod="kube-system/coredns-6d4b75cb6d-pxz4n"
	Oct 11 22:04:26 running-upgrade-130000 kubelet[10624]: I1011 22:04:26.038389   10624 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dg2n\" (UniqueName: \"kubernetes.io/projected/38140a20-5848-43af-aee6-03c489a7a544-kube-api-access-8dg2n\") pod \"coredns-6d4b75cb6d-pxz4n\" (UID: \"38140a20-5848-43af-aee6-03c489a7a544\") " pod="kube-system/coredns-6d4b75cb6d-pxz4n"
	Oct 11 22:04:26 running-upgrade-130000 kubelet[10624]: I1011 22:04:26.038423   10624 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjkxj\" (UniqueName: \"kubernetes.io/projected/a41d0d42-35d4-4705-942e-d26fa015c6f8-kube-api-access-rjkxj\") pod \"coredns-6d4b75cb6d-z6msc\" (UID: \"a41d0d42-35d4-4705-942e-d26fa015c6f8\") " pod="kube-system/coredns-6d4b75cb6d-z6msc"
	Oct 11 22:04:26 running-upgrade-130000 kubelet[10624]: I1011 22:04:26.038435   10624 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a41d0d42-35d4-4705-942e-d26fa015c6f8-config-volume\") pod \"coredns-6d4b75cb6d-z6msc\" (UID: \"a41d0d42-35d4-4705-942e-d26fa015c6f8\") " pod="kube-system/coredns-6d4b75cb6d-z6msc"
	Oct 11 22:08:14 running-upgrade-130000 kubelet[10624]: I1011 22:08:14.121745   10624 scope.go:110] "RemoveContainer" containerID="7f1165bcc644a7b67a6c1f6bce07ca519b9dab89e6658749726abc5996f0bbf3"
	Oct 11 22:08:14 running-upgrade-130000 kubelet[10624]: I1011 22:08:14.143574   10624 scope.go:110] "RemoveContainer" containerID="eb84c0e2fa42de3ef88dd6c6c9d1aa4787e783e0a52c952a9814715262163f8c"
	
	
	==> storage-provisioner [bbaa751bccbf] <==
	I1011 22:04:26.871600       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1011 22:04:26.879108       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1011 22:04:26.879332       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1011 22:04:26.883915       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1011 22:04:26.883974       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-130000_d8319044-d7f6-4f2d-b0e3-3594f24ffa59!
	I1011 22:04:26.884605       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d48ad2aa-d3bd-4a6e-a196-ecfeacdd7e86", APIVersion:"v1", ResourceVersion:"384", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-130000_d8319044-d7f6-4f2d-b0e3-3594f24ffa59 became leader
	I1011 22:04:26.985270       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-130000_d8319044-d7f6-4f2d-b0e3-3594f24ffa59!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-130000 -n running-upgrade-130000
E1011 15:08:45.190116    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/functional-044000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-130000 -n running-upgrade-130000: exit status 2 (15.703334334s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-130000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-130000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-130000
--- FAIL: TestRunningBinaryUpgrade (605.62s)

TestKubernetesUpgrade (17.4s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-463000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-463000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.969523459s)

-- stdout --
	* [kubernetes-upgrade-463000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-463000" primary control-plane node in "kubernetes-upgrade-463000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-463000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1011 15:01:44.396108    5079 out.go:345] Setting OutFile to fd 1 ...
	I1011 15:01:44.396259    5079 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:01:44.396262    5079 out.go:358] Setting ErrFile to fd 2...
	I1011 15:01:44.396265    5079 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:01:44.396412    5079 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 15:01:44.397577    5079 out.go:352] Setting JSON to false
	I1011 15:01:44.415296    5079 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5474,"bootTime":1728678630,"procs":506,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 15:01:44.415369    5079 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 15:01:44.420518    5079 out.go:177] * [kubernetes-upgrade-463000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 15:01:44.428526    5079 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 15:01:44.428619    5079 notify.go:220] Checking for updates...
	I1011 15:01:44.435417    5079 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 15:01:44.438411    5079 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 15:01:44.441540    5079 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 15:01:44.444477    5079 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 15:01:44.447387    5079 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 15:01:44.450862    5079 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 15:01:44.450935    5079 config.go:182] Loaded profile config "running-upgrade-130000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1011 15:01:44.450992    5079 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 15:01:44.454415    5079 out.go:177] * Using the qemu2 driver based on user configuration
	I1011 15:01:44.461486    5079 start.go:297] selected driver: qemu2
	I1011 15:01:44.461492    5079 start.go:901] validating driver "qemu2" against <nil>
	I1011 15:01:44.461507    5079 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 15:01:44.463900    5079 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 15:01:44.466443    5079 out.go:177] * Automatically selected the socket_vmnet network
	I1011 15:01:44.469584    5079 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1011 15:01:44.469603    5079 cni.go:84] Creating CNI manager for ""
	I1011 15:01:44.469642    5079 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1011 15:01:44.469686    5079 start.go:340] cluster config:
	{Name:kubernetes-upgrade-463000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 15:01:44.474437    5079 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:01:44.483217    5079 out.go:177] * Starting "kubernetes-upgrade-463000" primary control-plane node in "kubernetes-upgrade-463000" cluster
	I1011 15:01:44.487452    5079 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1011 15:01:44.487482    5079 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1011 15:01:44.487493    5079 cache.go:56] Caching tarball of preloaded images
	I1011 15:01:44.487610    5079 preload.go:172] Found /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1011 15:01:44.487616    5079 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1011 15:01:44.487675    5079 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/kubernetes-upgrade-463000/config.json ...
	I1011 15:01:44.487685    5079 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/kubernetes-upgrade-463000/config.json: {Name:mk16015ed31e5ab5894223318bbe6cc4b75e8082 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 15:01:44.487995    5079 start.go:360] acquireMachinesLock for kubernetes-upgrade-463000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:01:44.488051    5079 start.go:364] duration metric: took 47.667µs to acquireMachinesLock for "kubernetes-upgrade-463000"
	I1011 15:01:44.488065    5079 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 15:01:44.488111    5079 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 15:01:44.491458    5079 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1011 15:01:44.513954    5079 start.go:159] libmachine.API.Create for "kubernetes-upgrade-463000" (driver="qemu2")
	I1011 15:01:44.513998    5079 client.go:168] LocalClient.Create starting
	I1011 15:01:44.514099    5079 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 15:01:44.514145    5079 main.go:141] libmachine: Decoding PEM data...
	I1011 15:01:44.514156    5079 main.go:141] libmachine: Parsing certificate...
	I1011 15:01:44.514195    5079 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 15:01:44.514231    5079 main.go:141] libmachine: Decoding PEM data...
	I1011 15:01:44.514240    5079 main.go:141] libmachine: Parsing certificate...
	I1011 15:01:44.514670    5079 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 15:01:44.682846    5079 main.go:141] libmachine: Creating SSH key...
	I1011 15:01:44.842072    5079 main.go:141] libmachine: Creating Disk image...
	I1011 15:01:44.842086    5079 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 15:01:44.842383    5079 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubernetes-upgrade-463000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubernetes-upgrade-463000/disk.qcow2
	I1011 15:01:44.853805    5079 main.go:141] libmachine: STDOUT: 
	I1011 15:01:44.853827    5079 main.go:141] libmachine: STDERR: 
	I1011 15:01:44.853899    5079 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubernetes-upgrade-463000/disk.qcow2 +20000M
	I1011 15:01:44.862829    5079 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 15:01:44.862845    5079 main.go:141] libmachine: STDERR: 
	I1011 15:01:44.862866    5079 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubernetes-upgrade-463000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubernetes-upgrade-463000/disk.qcow2
	I1011 15:01:44.862870    5079 main.go:141] libmachine: Starting QEMU VM...
	I1011 15:01:44.862883    5079 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:01:44.862912    5079 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubernetes-upgrade-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubernetes-upgrade-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubernetes-upgrade-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:89:1f:00:19:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubernetes-upgrade-463000/disk.qcow2
	I1011 15:01:44.864789    5079 main.go:141] libmachine: STDOUT: 
	I1011 15:01:44.864803    5079 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:01:44.864824    5079 client.go:171] duration metric: took 350.820709ms to LocalClient.Create
	I1011 15:01:46.866961    5079 start.go:128] duration metric: took 2.378869541s to createHost
	I1011 15:01:46.867015    5079 start.go:83] releasing machines lock for "kubernetes-upgrade-463000", held for 2.378994542s
	W1011 15:01:46.867053    5079 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:01:46.871243    5079 out.go:177] * Deleting "kubernetes-upgrade-463000" in qemu2 ...
	W1011 15:01:46.886210    5079 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:01:46.886230    5079 start.go:729] Will try again in 5 seconds ...
	I1011 15:01:51.888443    5079 start.go:360] acquireMachinesLock for kubernetes-upgrade-463000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:01:51.889093    5079 start.go:364] duration metric: took 533.334µs to acquireMachinesLock for "kubernetes-upgrade-463000"
	I1011 15:01:51.889252    5079 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 15:01:51.889619    5079 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 15:01:51.895213    5079 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1011 15:01:51.945043    5079 start.go:159] libmachine.API.Create for "kubernetes-upgrade-463000" (driver="qemu2")
	I1011 15:01:51.945092    5079 client.go:168] LocalClient.Create starting
	I1011 15:01:51.945261    5079 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 15:01:51.945363    5079 main.go:141] libmachine: Decoding PEM data...
	I1011 15:01:51.945379    5079 main.go:141] libmachine: Parsing certificate...
	I1011 15:01:51.945456    5079 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 15:01:51.945515    5079 main.go:141] libmachine: Decoding PEM data...
	I1011 15:01:51.945536    5079 main.go:141] libmachine: Parsing certificate...
	I1011 15:01:51.946210    5079 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 15:01:52.114892    5079 main.go:141] libmachine: Creating SSH key...
	I1011 15:01:52.267025    5079 main.go:141] libmachine: Creating Disk image...
	I1011 15:01:52.267033    5079 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 15:01:52.267295    5079 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubernetes-upgrade-463000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubernetes-upgrade-463000/disk.qcow2
	I1011 15:01:52.277629    5079 main.go:141] libmachine: STDOUT: 
	I1011 15:01:52.277651    5079 main.go:141] libmachine: STDERR: 
	I1011 15:01:52.277711    5079 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubernetes-upgrade-463000/disk.qcow2 +20000M
	I1011 15:01:52.286628    5079 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 15:01:52.286641    5079 main.go:141] libmachine: STDERR: 
	I1011 15:01:52.286662    5079 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubernetes-upgrade-463000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubernetes-upgrade-463000/disk.qcow2
	I1011 15:01:52.286668    5079 main.go:141] libmachine: Starting QEMU VM...
	I1011 15:01:52.286673    5079 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:01:52.286706    5079 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubernetes-upgrade-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubernetes-upgrade-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubernetes-upgrade-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:bc:d2:3d:52:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubernetes-upgrade-463000/disk.qcow2
	I1011 15:01:52.288573    5079 main.go:141] libmachine: STDOUT: 
	I1011 15:01:52.288588    5079 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:01:52.288599    5079 client.go:171] duration metric: took 343.506167ms to LocalClient.Create
	I1011 15:01:54.290755    5079 start.go:128] duration metric: took 2.401139875s to createHost
	I1011 15:01:54.290825    5079 start.go:83] releasing machines lock for "kubernetes-upgrade-463000", held for 2.401745s
	W1011 15:01:54.291179    5079 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-463000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-463000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:01:54.302795    5079 out.go:201] 
	W1011 15:01:54.306953    5079 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 15:01:54.306977    5079 out.go:270] * 
	* 
	W1011 15:01:54.309541    5079 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 15:01:54.319841    5079 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-463000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-463000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-463000: (1.989162s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-463000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-463000 status --format={{.Host}}: exit status 7 (62.43975ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-463000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-463000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.190499625s)

-- stdout --
	* [kubernetes-upgrade-463000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-463000" primary control-plane node in "kubernetes-upgrade-463000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-463000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-463000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1011 15:01:56.420712    5107 out.go:345] Setting OutFile to fd 1 ...
	I1011 15:01:56.420866    5107 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:01:56.420869    5107 out.go:358] Setting ErrFile to fd 2...
	I1011 15:01:56.420872    5107 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:01:56.421020    5107 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 15:01:56.422142    5107 out.go:352] Setting JSON to false
	I1011 15:01:56.439877    5107 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5486,"bootTime":1728678630,"procs":506,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 15:01:56.439951    5107 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 15:01:56.443842    5107 out.go:177] * [kubernetes-upgrade-463000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 15:01:56.449655    5107 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 15:01:56.449727    5107 notify.go:220] Checking for updates...
	I1011 15:01:56.457542    5107 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 15:01:56.460679    5107 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 15:01:56.464639    5107 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 15:01:56.467620    5107 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 15:01:56.470704    5107 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 15:01:56.473982    5107 config.go:182] Loaded profile config "kubernetes-upgrade-463000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1011 15:01:56.474259    5107 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 15:01:56.478650    5107 out.go:177] * Using the qemu2 driver based on existing profile
	I1011 15:01:56.485750    5107 start.go:297] selected driver: qemu2
	I1011 15:01:56.485758    5107 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 15:01:56.485819    5107 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 15:01:56.488301    5107 cni.go:84] Creating CNI manager for ""
	I1011 15:01:56.488334    5107 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1011 15:01:56.488364    5107 start.go:340] cluster config:
	{Name:kubernetes-upgrade-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-463000 Namespace:
default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnet
ClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 15:01:56.492694    5107 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:01:56.500634    5107 out.go:177] * Starting "kubernetes-upgrade-463000" primary control-plane node in "kubernetes-upgrade-463000" cluster
	I1011 15:01:56.503595    5107 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1011 15:01:56.503614    5107 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1011 15:01:56.503622    5107 cache.go:56] Caching tarball of preloaded images
	I1011 15:01:56.503709    5107 preload.go:172] Found /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1011 15:01:56.503715    5107 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1011 15:01:56.503762    5107 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/kubernetes-upgrade-463000/config.json ...
	I1011 15:01:56.504191    5107 start.go:360] acquireMachinesLock for kubernetes-upgrade-463000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:01:56.504220    5107 start.go:364] duration metric: took 22.291µs to acquireMachinesLock for "kubernetes-upgrade-463000"
	I1011 15:01:56.504229    5107 start.go:96] Skipping create...Using existing machine configuration
	I1011 15:01:56.504234    5107 fix.go:54] fixHost starting: 
	I1011 15:01:56.504349    5107 fix.go:112] recreateIfNeeded on kubernetes-upgrade-463000: state=Stopped err=<nil>
	W1011 15:01:56.504355    5107 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 15:01:56.511707    5107 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-463000" ...
	I1011 15:01:56.515636    5107 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:01:56.515676    5107 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubernetes-upgrade-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubernetes-upgrade-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubernetes-upgrade-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:bc:d2:3d:52:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubernetes-upgrade-463000/disk.qcow2
	I1011 15:01:56.517683    5107 main.go:141] libmachine: STDOUT: 
	I1011 15:01:56.517700    5107 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:01:56.517727    5107 fix.go:56] duration metric: took 13.491209ms for fixHost
	I1011 15:01:56.517732    5107 start.go:83] releasing machines lock for "kubernetes-upgrade-463000", held for 13.507959ms
	W1011 15:01:56.517736    5107 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 15:01:56.517785    5107 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:01:56.517788    5107 start.go:729] Will try again in 5 seconds ...
	I1011 15:02:01.519924    5107 start.go:360] acquireMachinesLock for kubernetes-upgrade-463000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:02:01.520413    5107 start.go:364] duration metric: took 380.292µs to acquireMachinesLock for "kubernetes-upgrade-463000"
	I1011 15:02:01.520549    5107 start.go:96] Skipping create...Using existing machine configuration
	I1011 15:02:01.520567    5107 fix.go:54] fixHost starting: 
	I1011 15:02:01.521168    5107 fix.go:112] recreateIfNeeded on kubernetes-upgrade-463000: state=Stopped err=<nil>
	W1011 15:02:01.521187    5107 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 15:02:01.528529    5107 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-463000" ...
	I1011 15:02:01.532636    5107 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:02:01.532816    5107 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubernetes-upgrade-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubernetes-upgrade-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubernetes-upgrade-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:bc:d2:3d:52:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubernetes-upgrade-463000/disk.qcow2
	I1011 15:02:01.541759    5107 main.go:141] libmachine: STDOUT: 
	I1011 15:02:01.541821    5107 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:02:01.541920    5107 fix.go:56] duration metric: took 21.354708ms for fixHost
	I1011 15:02:01.541942    5107 start.go:83] releasing machines lock for "kubernetes-upgrade-463000", held for 21.508875ms
	W1011 15:02:01.542184    5107 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-463000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-463000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:02:01.550581    5107 out.go:201] 
	W1011 15:02:01.554700    5107 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 15:02:01.554729    5107 out.go:270] * 
	* 
	W1011 15:02:01.556699    5107 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 15:02:01.565482    5107 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-463000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-463000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-463000 version --output=json: exit status 1 (62.531583ms)

** stderr ** 
	error: context "kubernetes-upgrade-463000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-10-11 15:02:01.643619 -0700 PDT m=+3871.066805792
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-463000 -n kubernetes-upgrade-463000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-463000 -n kubernetes-upgrade-463000: exit status 7 (36.489792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-463000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-463000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-463000
--- FAIL: TestKubernetesUpgrade (17.40s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.18s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19749
- KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current4083159104/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.18s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (0.89s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19749
- KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1717482195/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (0.89s)

TestStoppedBinaryUpgrade/Upgrade (574.16s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.4025647520 start -p stopped-upgrade-583000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.4025647520 start -p stopped-upgrade-583000 --memory=2200 --vm-driver=qemu2 : (40.480560959s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.4025647520 -p stopped-upgrade-583000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.4025647520 -p stopped-upgrade-583000 stop: (12.105857125s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-583000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E1011 15:03:28.298596    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/functional-044000/client.crt: no such file or directory" logger="UnhandledError"
E1011 15:03:45.195509    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/functional-044000/client.crt: no such file or directory" logger="UnhandledError"
E1011 15:06:29.283093    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/addons-392000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-583000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.471675s)

-- stdout --
	* [stopped-upgrade-583000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-583000" primary control-plane node in "stopped-upgrade-583000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-583000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1011 15:02:55.536075    5145 out.go:345] Setting OutFile to fd 1 ...
	I1011 15:02:55.536229    5145 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:02:55.536233    5145 out.go:358] Setting ErrFile to fd 2...
	I1011 15:02:55.536236    5145 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:02:55.536360    5145 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 15:02:55.537412    5145 out.go:352] Setting JSON to false
	I1011 15:02:55.556703    5145 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5545,"bootTime":1728678630,"procs":503,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 15:02:55.556792    5145 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 15:02:55.561200    5145 out.go:177] * [stopped-upgrade-583000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 15:02:55.569095    5145 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 15:02:55.569144    5145 notify.go:220] Checking for updates...
	I1011 15:02:55.576998    5145 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 15:02:55.580031    5145 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 15:02:55.583966    5145 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 15:02:55.587042    5145 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 15:02:55.590042    5145 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 15:02:55.593282    5145 config.go:182] Loaded profile config "stopped-upgrade-583000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1011 15:02:55.597033    5145 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1011 15:02:55.598289    5145 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 15:02:55.602042    5145 out.go:177] * Using the qemu2 driver based on existing profile
	I1011 15:02:55.608873    5145 start.go:297] selected driver: qemu2
	I1011 15:02:55.608880    5145 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-583000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:57470 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgra
de-583000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1011 15:02:55.608955    5145 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 15:02:55.611831    5145 cni.go:84] Creating CNI manager for ""
	I1011 15:02:55.611872    5145 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
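cni.go:158 records the rule that picks a CNI here: the qemu2 driver with the docker runtime on Kubernetes v1.24 or newer gets the bridge CNI recommended. A hedged sketch of that decision with the version comparison written out; the helpers below are illustrative, not minikube's actual cni package.

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // atLeast124 reports whether a version string like "v1.24.1" is >= 1.24.
    func atLeast124(v string) bool {
    	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    	if len(parts) < 2 {
    		return false
    	}
    	major, _ := strconv.Atoi(parts[0])
    	minor, _ := strconv.Atoi(parts[1])
    	return major > 1 || (major == 1 && minor >= 24)
    }

    // recommendCNI mirrors the logged rule: the docker runtime on kubernetes
    // v1.24+ (here under the qemu2 driver) is steered to the bridge CNI;
    // anything else is left to the runtime's default networking in this sketch.
    func recommendCNI(containerRuntime, k8sVersion string) string {
    	if containerRuntime == "docker" && atLeast124(k8sVersion) {
    		return "bridge"
    	}
    	return ""
    }

    func main() {
    	fmt.Println(recommendCNI("docker", "v1.24.1")) // bridge
    }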
	I1011 15:02:55.611894    5145 start.go:340] cluster config:
	{Name:stopped-upgrade-583000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:57470 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-583000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1011 15:02:55.611951    5145 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:02:55.620012    5145 out.go:177] * Starting "stopped-upgrade-583000" primary control-plane node in "stopped-upgrade-583000" cluster
	I1011 15:02:55.624004    5145 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1011 15:02:55.624017    5145 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1011 15:02:55.624023    5145 cache.go:56] Caching tarball of preloaded images
	I1011 15:02:55.624080    5145 preload.go:172] Found /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1011 15:02:55.624086    5145 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1011 15:02:55.624139    5145 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/config.json ...
	I1011 15:02:55.624560    5145 start.go:360] acquireMachinesLock for stopped-upgrade-583000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:02:55.624590    5145 start.go:364] duration metric: took 24.917µs to acquireMachinesLock for "stopped-upgrade-583000"
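acquireMachinesLock is a profile-scoped lock with a 500ms retry delay and a 13-minute timeout, per the parameters logged above. The sketch below shows one way such a poll-until-acquired lock can be built on an exclusive lock file; it illustrates the pattern only, the lock path is made up, and it is not minikube's actual lock implementation.

    package main

    import (
    	"context"
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // acquireLock polls for an exclusive lock file until it wins or the context
    // expires. Delay is the poll interval (the log shows Delay:500ms, Timeout:13m0s).
    func acquireLock(ctx context.Context, path string, delay time.Duration) (release func(), err error) {
    	for {
    		// O_EXCL makes the create atomic: only one process can win the lock.
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(path) }, nil
    		}
    		if !errors.Is(err, os.ErrExist) {
    			return nil, err
    		}
    		select {
    		case <-ctx.Done():
    			return nil, fmt.Errorf("timed out waiting for %s: %w", path, ctx.Err())
    		case <-time.After(delay):
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 13*time.Minute)
    	defer cancel()

    	start := time.Now()
    	release, err := acquireLock(ctx, "/tmp/minikube-machines.lock", 500*time.Millisecond)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	defer release()
    	fmt.Printf("duration metric: took %s to acquire lock\n", time.Since(start))
    }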
	I1011 15:02:55.624600    5145 start.go:96] Skipping create...Using existing machine configuration
	I1011 15:02:55.624605    5145 fix.go:54] fixHost starting: 
	I1011 15:02:55.624716    5145 fix.go:112] recreateIfNeeded on stopped-upgrade-583000: state=Stopped err=<nil>
	W1011 15:02:55.624726    5145 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 15:02:55.632041    5145 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-583000" ...
	I1011 15:02:55.636039    5145 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:02:55.636121    5145 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/stopped-upgrade-583000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/stopped-upgrade-583000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/stopped-upgrade-583000/qemu.pid -nic user,model=virtio,hostfwd=tcp::57437-:22,hostfwd=tcp::57438-:2376,hostname=stopped-upgrade-583000 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/stopped-upgrade-583000/disk.qcow2
	I1011 15:02:55.683568    5145 main.go:141] libmachine: STDOUT: 
	I1011 15:02:55.683702    5145 main.go:141] libmachine: STDERR: 
	I1011 15:02:55.683715    5145 main.go:141] libmachine: Waiting for VM to start (ssh -p 57437 docker@127.0.0.1)...
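"Waiting for VM to start (ssh -p 57437 docker@127.0.0.1)..." is the point where the driver polls the forwarded SSH port from the qemu command above until the guest answers. A minimal sketch of that wait loop; the timeout and poll interval are assumptions for illustration.

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // waitForSSH dials the forwarded SSH port until the guest accepts connections
    // or the deadline passes. Port 57437 is the hostfwd port from the qemu command.
    func waitForSSH(addr string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err == nil {
    			conn.Close()
    			return nil
    		}
    		time.Sleep(time.Second)
    	}
    	return fmt.Errorf("ssh on %s not reachable after %s", addr, timeout)
    }

    func main() {
    	if err := waitForSSH("127.0.0.1:57437", 5*time.Minute); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("VM is accepting SSH connections")
    }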
	I1011 15:03:15.445838    5145 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/config.json ...
	I1011 15:03:15.446159    5145 machine.go:93] provisionDockerMachine start ...
	I1011 15:03:15.446248    5145 main.go:141] libmachine: Using SSH client type: native
	I1011 15:03:15.446429    5145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100526480] 0x100528cc0 <nil>  [] 0s} localhost 57437 <nil> <nil>}
	I1011 15:03:15.446435    5145 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 15:03:15.514280    5145 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1011 15:03:15.514296    5145 buildroot.go:166] provisioning hostname "stopped-upgrade-583000"
	I1011 15:03:15.514356    5145 main.go:141] libmachine: Using SSH client type: native
	I1011 15:03:15.514468    5145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100526480] 0x100528cc0 <nil>  [] 0s} localhost 57437 <nil> <nil>}
	I1011 15:03:15.514475    5145 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-583000 && echo "stopped-upgrade-583000" | sudo tee /etc/hostname
	I1011 15:03:15.582019    5145 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-583000
	
	I1011 15:03:15.582077    5145 main.go:141] libmachine: Using SSH client type: native
	I1011 15:03:15.582188    5145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100526480] 0x100528cc0 <nil>  [] 0s} localhost 57437 <nil> <nil>}
	I1011 15:03:15.582196    5145 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-583000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-583000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-583000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 15:03:15.649038    5145 main.go:141] libmachine: SSH cmd err, output: <nil>: 
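Each provisioning step above ("hostname", the sudo hostname command, the /etc/hosts edit) is one command run over the forwarded SSH port with the machine's private key. A self-contained sketch of that run-one-command pattern using golang.org/x/crypto/ssh; the helper stands in for libmachine's native SSH client and is illustrative only, with error handling cut to the minimum.

    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runSSH runs a single command on the guest, the way the provisioner runs
    // "sudo hostname ..." and the /etc/hosts edit. Key path and port are the
    // ones from this log; the helper itself is a sketch, not minikube's runner.
    func runSSH(addr, user, keyPath, cmd string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VMs regenerate host keys
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	out, err := runSSH("127.0.0.1:57437", "docker",
    		"/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/stopped-upgrade-583000/id_rsa",
    		"hostname")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Print(out)
    }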
	I1011 15:03:15.649051    5145 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19749-1186/.minikube CaCertPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19749-1186/.minikube}
	I1011 15:03:15.649058    5145 buildroot.go:174] setting up certificates
	I1011 15:03:15.649063    5145 provision.go:84] configureAuth start
	I1011 15:03:15.649071    5145 provision.go:143] copyHostCerts
	I1011 15:03:15.649144    5145 exec_runner.go:144] found /Users/jenkins/minikube-integration/19749-1186/.minikube/ca.pem, removing ...
	I1011 15:03:15.649151    5145 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19749-1186/.minikube/ca.pem
	I1011 15:03:15.649362    5145 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19749-1186/.minikube/ca.pem (1078 bytes)
	I1011 15:03:15.649567    5145 exec_runner.go:144] found /Users/jenkins/minikube-integration/19749-1186/.minikube/cert.pem, removing ...
	I1011 15:03:15.649572    5145 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19749-1186/.minikube/cert.pem
	I1011 15:03:15.649615    5145 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19749-1186/.minikube/cert.pem (1123 bytes)
	I1011 15:03:15.649718    5145 exec_runner.go:144] found /Users/jenkins/minikube-integration/19749-1186/.minikube/key.pem, removing ...
	I1011 15:03:15.649721    5145 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19749-1186/.minikube/key.pem
	I1011 15:03:15.649760    5145 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19749-1186/.minikube/key.pem (1675 bytes)
	I1011 15:03:15.649848    5145 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-583000 san=[127.0.0.1 localhost minikube stopped-upgrade-583000]
	I1011 15:03:15.769863    5145 provision.go:177] copyRemoteCerts
	I1011 15:03:15.769911    5145 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 15:03:15.769919    5145 sshutil.go:53] new ssh client: &{IP:localhost Port:57437 SSHKeyPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/stopped-upgrade-583000/id_rsa Username:docker}
	I1011 15:03:15.804141    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1011 15:03:15.810828    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1011 15:03:15.817437    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1011 15:03:15.824607    5145 provision.go:87] duration metric: took 175.535792ms to configureAuth
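configureAuth regenerates the Docker TLS material: the host CA and client certs are copied, and a server certificate is issued whose SANs are the IPs and names logged at provision.go:117. The sketch below creates a throwaway CA and signs a server cert with a mixed IP/DNS SAN list using crypto/x509; it shows the shape of that step under stated assumptions and is not minikube's provisioner code.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    // splitSANs sorts the SAN list from the log (127.0.0.1 localhost minikube
    // stopped-upgrade-583000) into IP and DNS entries for the certificate template.
    func splitSANs(sans []string) (ips []net.IP, dns []string) {
    	for _, s := range sans {
    		if ip := net.ParseIP(s); ip != nil {
    			ips = append(ips, ip)
    		} else {
    			dns = append(dns, s)
    		}
    	}
    	return
    }

    func main() {
    	// Throwaway CA standing in for ~/.minikube/certs/ca.pem and ca-key.pem.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA (example)"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server cert with the SAN list and org seen in the log.
    	ips, dns := splitSANs([]string{"127.0.0.1", "localhost", "minikube", "stopped-upgrade-583000"})
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-583000"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  ips,
    		DNSNames:     dns,
    	}
    	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
    }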
	I1011 15:03:15.824616    5145 buildroot.go:189] setting minikube options for container-runtime
	I1011 15:03:15.824737    5145 config.go:182] Loaded profile config "stopped-upgrade-583000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1011 15:03:15.824787    5145 main.go:141] libmachine: Using SSH client type: native
	I1011 15:03:15.824877    5145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100526480] 0x100528cc0 <nil>  [] 0s} localhost 57437 <nil> <nil>}
	I1011 15:03:15.824881    5145 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1011 15:03:15.889963    5145 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1011 15:03:15.889972    5145 buildroot.go:70] root file system type: tmpfs
	I1011 15:03:15.890025    5145 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1011 15:03:15.890081    5145 main.go:141] libmachine: Using SSH client type: native
	I1011 15:03:15.890180    5145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100526480] 0x100528cc0 <nil>  [] 0s} localhost 57437 <nil> <nil>}
	I1011 15:03:15.890213    5145 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1011 15:03:15.958753    5145 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1011 15:03:15.958813    5145 main.go:141] libmachine: Using SSH client type: native
	I1011 15:03:15.958932    5145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100526480] 0x100528cc0 <nil>  [] 0s} localhost 57437 <nil> <nil>}
	I1011 15:03:15.958942    5145 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1011 15:03:16.334926    5145 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1011 15:03:16.334940    5145 machine.go:96] duration metric: took 888.78925ms to provisionDockerMachine
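The docker.service update above is deliberately idempotent: the rendered unit is written to docker.service.new, diffed against what is installed, and only on a difference is it moved into place and the daemon reloaded, enabled and restarted (here the diff fails because no unit exists yet, so the new file is simply installed). Below is a compact local sketch of that write-compare-swap pattern; the paths and the restart command are illustrative, not the guest's.

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // installIfChanged writes content to path+".new" and only replaces path and
    // runs the restart command when the rendered content differs from what is
    // installed, mirroring the `diff -u ... || { mv ...; systemctl ... }` one-liner.
    func installIfChanged(path string, content []byte, restart ...string) error {
    	current, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(current, content) {
    		return nil // already up to date, nothing to restart
    	}
    	tmp := path + ".new"
    	if err := os.WriteFile(tmp, content, 0o644); err != nil {
    		return err
    	}
    	if err := os.Rename(tmp, path); err != nil {
    		return err
    	}
    	if len(restart) > 0 {
    		return exec.Command(restart[0], restart[1:]...).Run()
    	}
    	return nil
    }

    func main() {
    	unit := []byte("[Unit]\nDescription=example unit\n")
    	// Writing under /tmp here; on the guest this would be
    	// /lib/systemd/system/docker.service followed by daemon-reload and restart.
    	err := installIfChanged("/tmp/docker.service.example", unit, "true")
    	fmt.Println("install result:", err)
    }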
	I1011 15:03:16.334946    5145 start.go:293] postStartSetup for "stopped-upgrade-583000" (driver="qemu2")
	I1011 15:03:16.334954    5145 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 15:03:16.335042    5145 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 15:03:16.335053    5145 sshutil.go:53] new ssh client: &{IP:localhost Port:57437 SSHKeyPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/stopped-upgrade-583000/id_rsa Username:docker}
	I1011 15:03:16.369047    5145 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 15:03:16.370303    5145 info.go:137] Remote host: Buildroot 2021.02.12
	I1011 15:03:16.370310    5145 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19749-1186/.minikube/addons for local assets ...
	I1011 15:03:16.370382    5145 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19749-1186/.minikube/files for local assets ...
	I1011 15:03:16.370477    5145 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19749-1186/.minikube/files/etc/ssl/certs/17072.pem -> 17072.pem in /etc/ssl/certs
	I1011 15:03:16.370584    5145 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 15:03:16.373376    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/files/etc/ssl/certs/17072.pem --> /etc/ssl/certs/17072.pem (1708 bytes)
	I1011 15:03:16.380397    5145 start.go:296] duration metric: took 45.446209ms for postStartSetup
	I1011 15:03:16.380411    5145 fix.go:56] duration metric: took 20.756134875s for fixHost
	I1011 15:03:16.380455    5145 main.go:141] libmachine: Using SSH client type: native
	I1011 15:03:16.380573    5145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100526480] 0x100528cc0 <nil>  [] 0s} localhost 57437 <nil> <nil>}
	I1011 15:03:16.380577    5145 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1011 15:03:16.446187    5145 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728684196.936639546
	
	I1011 15:03:16.446198    5145 fix.go:216] guest clock: 1728684196.936639546
	I1011 15:03:16.446202    5145 fix.go:229] Guest: 2024-10-11 15:03:16.936639546 -0700 PDT Remote: 2024-10-11 15:03:16.380413 -0700 PDT m=+20.866889834 (delta=556.226546ms)
	I1011 15:03:16.446216    5145 fix.go:200] guest clock delta is within tolerance: 556.226546ms
	I1011 15:03:16.446222    5145 start.go:83] releasing machines lock for "stopped-upgrade-583000", held for 20.821955292s
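fix.go compares the guest's `date +%s.%N` output with the host clock and only resynchronises when the skew exceeds a tolerance; here the ~556ms delta is accepted. A small sketch of that comparison follows; the tolerance constant is an assumption for illustration, not minikube's actual threshold.

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and returns guest minus
    // host, like the "guest clock" lines above (delta=556.226546ms in this run).
    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(guestOut, 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	return guest.Sub(host), nil
    }

    func main() {
    	host := time.Now()
    	// Synthesize a guest reading ~556ms ahead of the host for the example.
    	guestOut := fmt.Sprintf("%.9f", float64(host.UnixNano())/1e9+0.556)
    	delta, err := clockDelta(guestOut, host)
    	if err != nil {
    		panic(err)
    	}
    	const tolerance = 2 * time.Second // assumed threshold for this sketch
    	fmt.Printf("guest clock delta: %s (within tolerance: %v)\n",
    		delta, delta < tolerance && delta > -tolerance)
    	// If the delta were outside tolerance, the fixer would reset the guest
    	// clock over SSH, e.g. with `sudo date -s @<unix-seconds>`.
    }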
	I1011 15:03:16.446292    5145 ssh_runner.go:195] Run: cat /version.json
	I1011 15:03:16.446297    5145 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 15:03:16.446300    5145 sshutil.go:53] new ssh client: &{IP:localhost Port:57437 SSHKeyPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/stopped-upgrade-583000/id_rsa Username:docker}
	I1011 15:03:16.446317    5145 sshutil.go:53] new ssh client: &{IP:localhost Port:57437 SSHKeyPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/stopped-upgrade-583000/id_rsa Username:docker}
	W1011 15:03:16.446810    5145 sshutil.go:64] dial failure (will retry): dial tcp [::1]:57437: connect: connection refused
	I1011 15:03:16.446828    5145 retry.go:31] will retry after 289.694169ms: dial tcp [::1]:57437: connect: connection refused
	W1011 15:03:16.477695    5145 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1011 15:03:16.477743    5145 ssh_runner.go:195] Run: systemctl --version
	I1011 15:03:16.479652    5145 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1011 15:03:16.481276    5145 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1011 15:03:16.481313    5145 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1011 15:03:16.484527    5145 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1011 15:03:16.489373    5145 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 15:03:16.489381    5145 start.go:495] detecting cgroup driver to use...
	I1011 15:03:16.489462    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 15:03:16.496659    5145 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1011 15:03:16.500439    5145 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1011 15:03:16.503587    5145 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1011 15:03:16.503618    5145 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1011 15:03:16.509032    5145 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1011 15:03:16.512606    5145 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1011 15:03:16.515667    5145 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1011 15:03:16.518776    5145 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 15:03:16.521728    5145 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1011 15:03:16.524700    5145 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1011 15:03:16.527833    5145 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1011 15:03:16.530783    5145 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 15:03:16.533895    5145 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 15:03:16.538596    5145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 15:03:16.611704    5145 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1011 15:03:16.617572    5145 start.go:495] detecting cgroup driver to use...
	I1011 15:03:16.617655    5145 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1011 15:03:16.624052    5145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 15:03:16.629294    5145 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1011 15:03:16.635310    5145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1011 15:03:16.639489    5145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1011 15:03:16.643876    5145 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1011 15:03:16.711007    5145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1011 15:03:16.716329    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 15:03:16.721974    5145 ssh_runner.go:195] Run: which cri-dockerd
	I1011 15:03:16.723260    5145 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1011 15:03:16.725814    5145 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1011 15:03:16.730706    5145 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1011 15:03:16.799892    5145 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1011 15:03:16.934140    5145 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1011 15:03:16.934198    5145 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1011 15:03:16.939259    5145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 15:03:17.028077    5145 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1011 15:03:18.183944    5145 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.155864583s)
	I1011 15:03:18.184029    5145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1011 15:03:18.188616    5145 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1011 15:03:18.194810    5145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1011 15:03:18.199543    5145 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1011 15:03:18.268262    5145 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1011 15:03:18.345005    5145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 15:03:18.421358    5145 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1011 15:03:18.427913    5145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1011 15:03:18.432647    5145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 15:03:18.507687    5145 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1011 15:03:18.546182    5145 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1011 15:03:18.546274    5145 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1011 15:03:18.548339    5145 start.go:563] Will wait 60s for crictl version
	I1011 15:03:18.548413    5145 ssh_runner.go:195] Run: which crictl
	I1011 15:03:18.549851    5145 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 15:03:18.566060    5145 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1011 15:03:18.566136    5145 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1011 15:03:18.583469    5145 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1011 15:03:18.607825    5145 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1011 15:03:18.607974    5145 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1011 15:03:18.609306    5145 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
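The pair of commands above is minikube's idempotent /etc/hosts update: grep for the entry, then filter out any stale line and append "10.0.2.2 host.minikube.internal" via a temp file. A local Go sketch of the same filter-and-append rewrite; it operates on a scratch file rather than the guest's /etc/hosts, and the helper name is made up.

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry drops any existing line for name and appends "ip\tname",
    // mirroring the grep -v / echo / cp one-liner in the log above.
    func ensureHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // stale mapping, re-added below
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	tmp := path + ".new"
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		return err
    	}
    	// Swap in the rewritten file (the logged command copies from /tmp with sudo cp).
    	return os.Rename(tmp, path)
    }

    func main() {
    	scratch := "/tmp/hosts.example"
    	_ = os.WriteFile(scratch, []byte("127.0.0.1\tlocalhost\n10.0.2.2\thost.minikube.internal\n"), 0o644)
    	if err := ensureHostsEntry(scratch, "10.0.2.2", "host.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	out, _ := os.ReadFile(scratch)
    	fmt.Print(string(out))
    }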
	I1011 15:03:18.613185    5145 kubeadm.go:883] updating cluster {Name:stopped-upgrade-583000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:57470 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName
:stopped-upgrade-583000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1011 15:03:18.613238    5145 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1011 15:03:18.613285    5145 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1011 15:03:18.623956    5145 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1011 15:03:18.623965    5145 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
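The verdict "registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded" follows from a plain membership check: the tarball restored images tagged under k8s.gcr.io (the listing above), while this minikube looks for the registry.k8s.io names, so the check fails and the images are loaded from the local cache instead. A sketch of that check over the `docker images --format {{.Repository}}:{{.Tag}}` output; the helper is hypothetical, not minikube's docker.go.

    package main

    import (
    	"fmt"
    	"strings"
    )

    // imagesPreloaded reports whether every wanted tag appears verbatim in the
    // `docker images` listing; a rename like k8s.gcr.io -> registry.k8s.io is
    // enough to make it fail even though equivalent images are present.
    func imagesPreloaded(dockerImagesOutput string, want []string) bool {
    	have := map[string]bool{}
    	for _, line := range strings.Split(strings.TrimSpace(dockerImagesOutput), "\n") {
    		have[strings.TrimSpace(line)] = true
    	}
    	for _, img := range want {
    		if !have[img] {
    			fmt.Printf("%s wasn't preloaded\n", img)
    			return false
    		}
    	}
    	return true
    }

    func main() {
    	output := `k8s.gcr.io/kube-apiserver:v1.24.1
    k8s.gcr.io/kube-proxy:v1.24.1
    gcr.io/k8s-minikube/storage-provisioner:v5`
    	want := []string{"registry.k8s.io/kube-apiserver:v1.24.1"}
    	fmt.Println("preloaded:", imagesPreloaded(output, want))
    }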
	I1011 15:03:18.624031    5145 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1011 15:03:18.627256    5145 ssh_runner.go:195] Run: which lz4
	I1011 15:03:18.628480    5145 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1011 15:03:18.629678    5145 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1011 15:03:18.629687    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1011 15:03:19.591818    5145 docker.go:653] duration metric: took 963.398167ms to copy over tarball
	I1011 15:03:19.591907    5145 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1011 15:03:20.791859    5145 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.199946208s)
	I1011 15:03:20.791873    5145 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1011 15:03:20.807349    5145 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1011 15:03:20.810139    5145 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1011 15:03:20.815124    5145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 15:03:20.897696    5145 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1011 15:03:22.428426    5145 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.530737667s)
	I1011 15:03:22.428539    5145 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1011 15:03:22.444073    5145 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1011 15:03:22.444082    5145 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1011 15:03:22.444090    5145 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1011 15:03:22.450522    5145 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 15:03:22.451674    5145 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1011 15:03:22.452773    5145 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 15:03:22.453069    5145 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1011 15:03:22.454685    5145 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1011 15:03:22.454828    5145 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1011 15:03:22.456374    5145 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1011 15:03:22.456809    5145 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1011 15:03:22.457351    5145 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1011 15:03:22.457454    5145 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1011 15:03:22.458785    5145 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1011 15:03:22.459172    5145 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1011 15:03:22.459671    5145 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1011 15:03:22.459916    5145 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1011 15:03:22.461378    5145 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1011 15:03:22.461457    5145 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1011 15:03:23.039730    5145 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1011 15:03:23.042473    5145 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1011 15:03:23.051972    5145 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1011 15:03:23.052003    5145 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1011 15:03:23.052069    5145 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1011 15:03:23.059651    5145 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1011 15:03:23.059684    5145 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1011 15:03:23.059725    5145 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1011 15:03:23.065031    5145 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1011 15:03:23.070373    5145 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1011 15:03:23.071103    5145 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1011 15:03:23.081148    5145 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1011 15:03:23.081173    5145 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1011 15:03:23.081226    5145 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1011 15:03:23.093179    5145 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1011 15:03:23.118331    5145 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1011 15:03:23.128594    5145 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1011 15:03:23.128623    5145 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1011 15:03:23.128683    5145 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1011 15:03:23.138344    5145 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1011 15:03:23.138487    5145 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1011 15:03:23.139980    5145 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1011 15:03:23.139992    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1011 15:03:23.148827    5145 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1011 15:03:23.148835    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1011 15:03:23.154918    5145 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1011 15:03:23.183881    5145 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1011 15:03:23.183924    5145 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1011 15:03:23.183942    5145 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1011 15:03:23.184003    5145 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1011 15:03:23.195023    5145 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	W1011 15:03:23.228583    5145 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1011 15:03:23.228751    5145 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1011 15:03:23.238978    5145 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1011 15:03:23.239000    5145 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1011 15:03:23.239069    5145 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1011 15:03:23.249571    5145 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1011 15:03:23.249727    5145 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1011 15:03:23.251222    5145 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1011 15:03:23.251236    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1011 15:03:23.292634    5145 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1011 15:03:23.292647    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1011 15:03:23.332335    5145 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1011 15:03:23.361751    5145 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	W1011 15:03:23.370763    5145 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1011 15:03:23.370901    5145 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 15:03:23.372373    5145 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1011 15:03:23.372392    5145 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1011 15:03:23.372435    5145 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1011 15:03:23.384289    5145 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1011 15:03:23.384315    5145 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 15:03:23.384381    5145 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 15:03:23.391685    5145 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1011 15:03:23.391831    5145 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1011 15:03:23.401606    5145 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I1011 15:03:23.401636    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I1011 15:03:23.401672    5145 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1011 15:03:23.401795    5145 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1011 15:03:23.403743    5145 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1011 15:03:23.403764    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1011 15:03:23.471190    5145 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1011 15:03:23.471203    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1011 15:03:23.848898    5145 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1011 15:03:23.848927    5145 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1011 15:03:23.848935    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I1011 15:03:23.987459    5145 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1011 15:03:23.987502    5145 cache_images.go:92] duration metric: took 1.543429291s to LoadCachedImages
	W1011 15:03:23.987546    5145 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
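Each image flagged "needs transfer" above goes through the same sequence: inspect the ID the runtime has for the tag, remove the stale tag, copy the cached tarball into /var/lib/minikube/images, and pipe it through `docker load`. The final warning shows the step can still fail on the host side: the cached tarballs for the kube-apiserver, controller-manager, scheduler and proxy images are missing from the host cache, so LoadCachedImages gives up for those. The sketch below runs the per-image sequence against a local docker daemon (in the log the same commands run inside the VM over SSH); the helper is illustrative, with the ID and tarball path taken from the pause:3.7 lines above.

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // ensureImage checks the image ID the runtime has for a tag and, if it is
    // missing or different, drops the tag and streams the cached tarball into
    // `docker load`, mirroring the "needs transfer" path in the log.
    func ensureImage(tag, wantID, cachedTarball string) error {
    	out, _ := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", tag).Output()
    	if strings.TrimSpace(string(out)) == wantID {
    		return nil // already present at the expected hash
    	}
    	_ = exec.Command("docker", "rmi", tag).Run() // remove any stale tag first
    	f, err := os.Open(cachedTarball)
    	if err != nil {
    		return err // cache miss, exactly the failure shown for kube-apiserver above
    	}
    	defer f.Close()
    	cmd := exec.Command("docker", "load")
    	cmd.Stdin = f
    	var stderr bytes.Buffer
    	cmd.Stderr = &stderr
    	if err := cmd.Run(); err != nil {
    		return fmt.Errorf("docker load: %v: %s", err, stderr.String())
    	}
    	return nil
    }

    func main() {
    	// ID from the log, with docker's "sha256:" prefix added for {{.Id}} output.
    	err := ensureImage("registry.k8s.io/pause:3.7",
    		"sha256:e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550",
    		"/var/lib/minikube/images/pause_3.7")
    	fmt.Println(err)
    }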
	I1011 15:03:23.987552    5145 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1011 15:03:23.987609    5145 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-583000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-583000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 15:03:23.987688    5145 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1011 15:03:24.001605    5145 cni.go:84] Creating CNI manager for ""
	I1011 15:03:24.001617    5145 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1011 15:03:24.001623    5145 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 15:03:24.001631    5145 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-583000 NodeName:stopped-upgrade-583000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 15:03:24.001706    5145 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-583000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 15:03:24.001776    5145 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1011 15:03:24.005077    5145 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 15:03:24.005117    5145 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 15:03:24.008377    5145 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1011 15:03:24.013495    5145 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 15:03:24.018697    5145 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1011 15:03:24.024011    5145 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1011 15:03:24.025242    5145 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 15:03:24.028960    5145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 15:03:24.109891    5145 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 15:03:24.115529    5145 certs.go:68] Setting up /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000 for IP: 10.0.2.15
	I1011 15:03:24.115541    5145 certs.go:194] generating shared ca certs ...
	I1011 15:03:24.115550    5145 certs.go:226] acquiring lock for ca certs: {Name:mk35edffff951ee63400693cabf88751b6257cd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 15:03:24.115743    5145 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19749-1186/.minikube/ca.key
	I1011 15:03:24.116475    5145 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19749-1186/.minikube/proxy-client-ca.key
	I1011 15:03:24.116483    5145 certs.go:256] generating profile certs ...
	I1011 15:03:24.116713    5145 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/client.key
	I1011 15:03:24.116730    5145 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/apiserver.key.dabe18a6
	I1011 15:03:24.116743    5145 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/apiserver.crt.dabe18a6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1011 15:03:24.188646    5145 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/apiserver.crt.dabe18a6 ...
	I1011 15:03:24.188658    5145 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/apiserver.crt.dabe18a6: {Name:mke2e906f6aa60aa296960fd8012aab304f8de9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 15:03:24.189354    5145 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/apiserver.key.dabe18a6 ...
	I1011 15:03:24.189361    5145 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/apiserver.key.dabe18a6: {Name:mk4e6f11d67b071a3f770925a637f8f17d79183f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 15:03:24.189538    5145 certs.go:381] copying /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/apiserver.crt.dabe18a6 -> /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/apiserver.crt
	I1011 15:03:24.189666    5145 certs.go:385] copying /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/apiserver.key.dabe18a6 -> /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/apiserver.key
	I1011 15:03:24.189904    5145 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/proxy-client.key
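
The profile cert step above signs an apiserver serving certificate against the shared minikubeCA for the four IP SANs listed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 10.0.2.15). A minimal crypto/x509 sketch of issuing such a certificate, assuming the CA pair is already loaded as caCert/caKey; this is illustrative, not minikube's actual crypto.go:

package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// signAPIServerCert issues a serving cert for the IP SANs seen in the log,
// signed by an already-loaded CA key pair (loading omitted for brevity).
func signAPIServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
		},
		NotBefore:   time.Now().Add(-time.Hour),
		NotAfter:    time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return err
	}
	return os.WriteFile("apiserver.crt",
		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
}
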
	I1011 15:03:24.190056    5145 certs.go:484] found cert: /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/1707.pem (1338 bytes)
	W1011 15:03:24.190220    5145 certs.go:480] ignoring /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/1707_empty.pem, impossibly tiny 0 bytes
	I1011 15:03:24.190229    5145 certs.go:484] found cert: /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca-key.pem (1679 bytes)
	I1011 15:03:24.190249    5145 certs.go:484] found cert: /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem (1078 bytes)
	I1011 15:03:24.190272    5145 certs.go:484] found cert: /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem (1123 bytes)
	I1011 15:03:24.190293    5145 certs.go:484] found cert: /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/key.pem (1675 bytes)
	I1011 15:03:24.190345    5145 certs.go:484] found cert: /Users/jenkins/minikube-integration/19749-1186/.minikube/files/etc/ssl/certs/17072.pem (1708 bytes)
	I1011 15:03:24.190706    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 15:03:24.197738    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 15:03:24.204873    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 15:03:24.212206    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 15:03:24.220284    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1011 15:03:24.226997    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1011 15:03:24.233310    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 15:03:24.240550    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1011 15:03:24.247683    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 15:03:24.253946    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/1707.pem --> /usr/share/ca-certificates/1707.pem (1338 bytes)
	I1011 15:03:24.261005    5145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19749-1186/.minikube/files/etc/ssl/certs/17072.pem --> /usr/share/ca-certificates/17072.pem (1708 bytes)
	I1011 15:03:24.268195    5145 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 15:03:24.273451    5145 ssh_runner.go:195] Run: openssl version
	I1011 15:03:24.275400    5145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 15:03:24.278240    5145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 15:03:24.279593    5145 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:58 /usr/share/ca-certificates/minikubeCA.pem
	I1011 15:03:24.279616    5145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 15:03:24.281198    5145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 15:03:24.284560    5145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1707.pem && ln -fs /usr/share/ca-certificates/1707.pem /etc/ssl/certs/1707.pem"
	I1011 15:03:24.287751    5145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1707.pem
	I1011 15:03:24.289274    5145 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:05 /usr/share/ca-certificates/1707.pem
	I1011 15:03:24.289304    5145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1707.pem
	I1011 15:03:24.291276    5145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1707.pem /etc/ssl/certs/51391683.0"
	I1011 15:03:24.294236    5145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17072.pem && ln -fs /usr/share/ca-certificates/17072.pem /etc/ssl/certs/17072.pem"
	I1011 15:03:24.297429    5145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17072.pem
	I1011 15:03:24.298868    5145 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:05 /usr/share/ca-certificates/17072.pem
	I1011 15:03:24.298894    5145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17072.pem
	I1011 15:03:24.300473    5145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17072.pem /etc/ssl/certs/3ec20f2e.0"
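
Each extra CA is copied into /usr/share/ca-certificates and then linked into /etc/ssl/certs under its OpenSSL subject-hash name, i.e. the output of `openssl x509 -hash -noout` plus a `.0` suffix; that is what the `test -L ... || ln -fs ...` commands above arrange (e.g. b5213941.0 for minikubeCA.pem). A small sketch of the same install step, shelling out to openssl; installCA is a hypothetical helper, not minikube code:

package catrust

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA links certPath into /etc/ssl/certs under its OpenSSL subject-hash
// name (e.g. /etc/ssl/certs/b5213941.0), mirroring the ln -fs commands above.
// Assumes it runs as root inside the guest; error handling kept minimal.
func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(certPath, link)
}
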
	I1011 15:03:24.303420    5145 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 15:03:24.304700    5145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1011 15:03:24.306702    5145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1011 15:03:24.308517    5145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1011 15:03:24.310515    5145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1011 15:03:24.312235    5145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1011 15:03:24.314646    5145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
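
The `openssl x509 -checkend 86400` invocations above ask whether each existing control-plane certificate will still be valid 24 hours from now; only then is it reused instead of regenerated. The same check in Go, parsing the PEM and comparing NotAfter (a sketch with a hypothetical validFor helper):

package certcheck

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"os"
	"time"
)

// validFor reports whether the first certificate in pemPath is still valid for
// at least the given duration, the question `openssl x509 -checkend` answers.
func validFor(pemPath string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}
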
	I1011 15:03:24.316402    5145 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-583000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:57470 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:st
opped-upgrade-583000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1011 15:03:24.316476    5145 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1011 15:03:24.326818    5145 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 15:03:24.329957    5145 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1011 15:03:24.329966    5145 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1011 15:03:24.329992    5145 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1011 15:03:24.332907    5145 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1011 15:03:24.333218    5145 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-583000" does not appear in /Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 15:03:24.333322    5145 kubeconfig.go:62] /Users/jenkins/minikube-integration/19749-1186/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-583000" cluster setting kubeconfig missing "stopped-upgrade-583000" context setting]
	I1011 15:03:24.333511    5145 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/kubeconfig: {Name:mkc848521291f94f61a80272f8eb43a8779805e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 15:03:24.333951    5145 kapi.go:59] client config for stopped-upgrade-583000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/client.key", CAFile:"/Users/jenkins/minikube-integration/19749-1186/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]ui
nt8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101f7ee40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
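
The rest.Config dump above is the client minikube builds for the restarted cluster: host https://10.0.2.15:8443, authenticated with the profile's client certificate and key and trusting the shared cluster CA. A minimal client-go sketch that constructs an equivalent clientset from the same files (illustrative; newClient is a hypothetical helper):

package kapi

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// newClient builds a Kubernetes clientset equivalent to the rest.Config in the
// log: client-certificate auth against the profile's apiserver endpoint.
func newClient() (*kubernetes.Clientset, error) {
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/client.crt",
			KeyFile:  "/Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/client.key",
			CAFile:   "/Users/jenkins/minikube-integration/19749-1186/.minikube/ca.crt",
		},
	}
	return kubernetes.NewForConfig(cfg)
}
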
	I1011 15:03:24.334428    5145 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1011 15:03:24.337392    5145 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-583000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
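
Drift detection is a plain `diff -u` between the kubeadm.yaml already on disk and the freshly rendered kubeadm.yaml.new; a non-empty diff (here the criSocket URI plus the cgroupDriver/hairpinMode/runtimeRequestTimeout kubelet fields) forces a reconfigure from the new file. A sketch of that check built on diff's exit status (0 = identical, 1 = differ), using a hypothetical configDrift helper rather than minikube's kubeadm.go:

package kubeadm

import (
	"errors"
	"os/exec"
)

// configDrift runs `sudo diff -u old new` and interprets the exit status:
// 0 means identical, 1 means the files differ, anything else is an error.
func configDrift(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, string(out), nil // drift detected; out holds the unified diff
	}
	return false, "", err
}
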
	I1011 15:03:24.337398    5145 kubeadm.go:1160] stopping kube-system containers ...
	I1011 15:03:24.337445    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1011 15:03:24.348532    5145 docker.go:483] Stopping containers: [3147d798970d b001d59290a4 e5ff18c232f1 26a6947a1458 cd8a136a40f5 e7805c8a9be5 f6da21be1d5b d3912344e421]
	I1011 15:03:24.348605    5145 ssh_runner.go:195] Run: docker stop 3147d798970d b001d59290a4 e5ff18c232f1 26a6947a1458 cd8a136a40f5 e7805c8a9be5 f6da21be1d5b d3912344e421
	I1011 15:03:24.359581    5145 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1011 15:03:24.365419    5145 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 15:03:24.368895    5145 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 15:03:24.368901    5145 kubeadm.go:157] found existing configuration files:
	
	I1011 15:03:24.368930    5145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57470 /etc/kubernetes/admin.conf
	I1011 15:03:24.372030    5145 kubeadm.go:163] "https://control-plane.minikube.internal:57470" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:57470 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 15:03:24.372061    5145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 15:03:24.374648    5145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57470 /etc/kubernetes/kubelet.conf
	I1011 15:03:24.377179    5145 kubeadm.go:163] "https://control-plane.minikube.internal:57470" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:57470 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 15:03:24.377207    5145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 15:03:24.380266    5145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57470 /etc/kubernetes/controller-manager.conf
	I1011 15:03:24.383037    5145 kubeadm.go:163] "https://control-plane.minikube.internal:57470" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:57470 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 15:03:24.383071    5145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 15:03:24.385498    5145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57470 /etc/kubernetes/scheduler.conf
	I1011 15:03:24.388551    5145 kubeadm.go:163] "https://control-plane.minikube.internal:57470" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:57470 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 15:03:24.388578    5145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
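
For each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf, the restart path greps for the expected endpoint https://control-plane.minikube.internal:57470 and removes the file when the grep fails, so the kubeadm phases that follow regenerate it; in this run every grep exits 2 simply because the files do not exist yet, and the `rm -f` calls are no-ops. A compact sketch of that loop (pruneStaleKubeconfigs is a hypothetical name):

package kubeconf

import (
	"fmt"
	"os/exec"
)

// pruneStaleKubeconfigs removes any /etc/kubernetes/*.conf that does not
// reference the expected endpoint, mirroring the grep/rm sequence in the log.
func pruneStaleKubeconfigs(endpoint string) {
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, f := range files {
		path := "/etc/kubernetes/" + f
		// grep exits non-zero when the endpoint is missing or the file is absent.
		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
			fmt.Printf("%q not found in %s, removing\n", endpoint, path)
			_ = exec.Command("sudo", "rm", "-f", path).Run()
		}
	}
}
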
	I1011 15:03:24.391440    5145 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 15:03:24.394127    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 15:03:24.416147    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 15:03:24.754606    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1011 15:03:24.888364    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 15:03:24.909948    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1011 15:03:24.939810    5145 api_server.go:52] waiting for apiserver process to appear ...
	I1011 15:03:24.939904    5145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 15:03:25.441682    5145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 15:03:25.940131    5145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 15:03:25.944789    5145 api_server.go:72] duration metric: took 1.004989625s to wait for apiserver process to appear ...
	I1011 15:03:25.944801    5145 api_server.go:88] waiting for apiserver healthz status ...
	I1011 15:03:25.944816    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:03:30.946835    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:03:30.946878    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:03:35.947359    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:03:35.947405    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:03:40.947907    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:03:40.947966    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:03:45.948663    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:03:45.948684    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:03:50.949522    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:03:50.949618    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:03:55.951182    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:03:55.951296    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:00.953520    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:00.953631    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:05.956104    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:05.956127    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:10.958234    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:10.958258    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:15.959439    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:15.959483    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:20.961767    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:20.961816    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:25.963415    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
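
After the kubeadm phases, the harness first waits for a kube-apiserver process (the pgrep loop) and then polls https://10.0.2.15:8443/healthz; each probe above times out after roughly five seconds and the loop keeps retrying until an overall deadline. A minimal polling sketch; waitHealthz is a hypothetical helper, and TLS verification is skipped only to keep the example short, whereas the real check trusts the cluster CA:

package apiserver

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200 OK or
// the overall deadline expires.
func waitHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

In this run the loop never succeeds: every probe through 15:05 fails with the same client timeout, which is why the remainder of the output is dominated by repeated diagnostic passes.
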
	I1011 15:04:25.963555    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:04:25.978750    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:04:25.978832    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:04:25.995250    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:04:25.995331    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:04:26.006203    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:04:26.006291    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:04:26.018234    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:04:26.018322    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:04:26.028738    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:04:26.028821    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:04:26.039839    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:04:26.039925    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:04:26.049867    5145 logs.go:282] 0 containers: []
	W1011 15:04:26.049879    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:04:26.049945    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:04:26.061416    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:04:26.061434    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:04:26.061440    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:04:26.073341    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:04:26.073354    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:04:26.087943    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:04:26.087953    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:04:26.115634    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:04:26.115647    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:04:26.132129    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:04:26.132141    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:04:26.148956    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:04:26.148968    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:04:26.160188    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:04:26.160201    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:04:26.172991    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:04:26.173003    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:04:26.190402    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:04:26.190414    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:04:26.203403    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:04:26.203417    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:04:26.247598    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:04:26.247618    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:04:26.366912    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:04:26.366928    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:04:26.380466    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:04:26.380484    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:04:26.397818    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:04:26.397831    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:04:26.416773    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:04:26.416788    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:04:26.421956    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:04:26.421968    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:04:26.435685    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:04:26.435696    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
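
Whenever a healthz probe fails, the harness snapshots diagnostics: it lists the (possibly exited) containers for each control-plane component via the k8s_<name> filter, tails the last 400 lines of each with `docker logs`, and also collects the kubelet and docker journals, dmesg, and `kubectl describe nodes`. A sketch of the per-component part (tailComponentLogs is a hypothetical helper, not minikube's logs.go):

package diagnostics

import (
	"fmt"
	"os/exec"
	"strings"
)

// tailComponentLogs finds containers whose names match k8s_<component> and
// prints the last 400 log lines of each, mirroring the docker commands above.
func tailComponentLogs(component string) error {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return err
	}
	for _, id := range strings.Fields(string(out)) {
		logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return fmt.Errorf("docker logs %s: %w", id, err)
		}
		fmt.Printf(">>> %s [%s]\n%s\n", component, id, logs)
	}
	return nil
}

The same container IDs recur in every subsequent pass, so the repeated gathering blocks below differ mainly in ordering.
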
	I1011 15:04:28.965993    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:33.968154    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:33.968428    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:04:33.995520    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:04:33.995666    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:04:34.018530    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:04:34.018619    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:04:34.031932    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:04:34.032039    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:04:34.043939    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:04:34.044021    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:04:34.056520    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:04:34.056586    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:04:34.067675    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:04:34.067756    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:04:34.077794    5145 logs.go:282] 0 containers: []
	W1011 15:04:34.077808    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:04:34.077875    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:04:34.089139    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:04:34.089156    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:04:34.089167    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:04:34.100690    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:04:34.100699    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:04:34.118617    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:04:34.118628    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:04:34.144196    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:04:34.144204    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:04:34.183666    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:04:34.183697    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:04:34.196702    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:04:34.196714    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:04:34.207870    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:04:34.207881    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:04:34.247063    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:04:34.247072    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:04:34.262353    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:04:34.262363    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:04:34.289565    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:04:34.289581    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:04:34.301070    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:04:34.301088    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:04:34.316412    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:04:34.316425    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:04:34.335625    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:04:34.335635    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:04:34.340349    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:04:34.340354    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:04:34.354776    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:04:34.354785    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:04:34.369183    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:04:34.369196    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:04:34.381452    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:04:34.381461    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:04:36.895074    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:41.897298    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:41.897554    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:04:41.921313    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:04:41.921444    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:04:41.937247    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:04:41.937342    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:04:41.951713    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:04:41.951788    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:04:41.962693    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:04:41.962789    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:04:41.973809    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:04:41.973887    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:04:41.984707    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:04:41.984789    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:04:41.995489    5145 logs.go:282] 0 containers: []
	W1011 15:04:41.995499    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:04:41.995560    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:04:42.006263    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:04:42.006281    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:04:42.006288    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:04:42.019170    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:04:42.019184    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:04:42.043358    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:04:42.043368    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:04:42.059260    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:04:42.059273    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:04:42.073873    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:04:42.073882    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:04:42.092533    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:04:42.092545    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:04:42.105143    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:04:42.105172    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:04:42.117324    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:04:42.117339    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:04:42.142146    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:04:42.142156    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:04:42.160141    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:04:42.160150    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:04:42.175429    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:04:42.175439    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:04:42.179583    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:04:42.179589    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:04:42.216016    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:04:42.216030    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:04:42.229954    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:04:42.229964    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:04:42.240961    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:04:42.240971    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:04:42.278743    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:04:42.278751    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:04:42.295192    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:04:42.295202    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:04:44.808657    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:49.810110    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:49.810380    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:04:49.837047    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:04:49.837172    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:04:49.852980    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:04:49.853078    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:04:49.867612    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:04:49.867709    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:04:49.883461    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:04:49.883542    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:04:49.894493    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:04:49.894570    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:04:49.905019    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:04:49.905101    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:04:49.915069    5145 logs.go:282] 0 containers: []
	W1011 15:04:49.915079    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:04:49.915146    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:04:49.925292    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:04:49.925310    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:04:49.925316    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:04:49.929646    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:04:49.929655    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:04:49.965614    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:04:49.965624    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:04:49.980256    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:04:49.980270    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:04:50.012990    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:04:50.013004    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:04:50.031157    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:04:50.031171    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:04:50.043424    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:04:50.043436    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:04:50.081195    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:04:50.081212    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:04:50.096199    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:04:50.096209    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:04:50.121882    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:04:50.121898    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:04:50.133376    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:04:50.133387    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:04:50.147678    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:04:50.147687    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:04:50.159107    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:04:50.159119    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:04:50.171061    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:04:50.171073    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:04:50.182998    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:04:50.183009    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:04:50.200193    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:04:50.200204    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:04:50.211850    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:04:50.211862    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:04:52.729369    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:04:57.730224    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:04:57.730405    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:04:57.746272    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:04:57.746362    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:04:57.758874    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:04:57.758954    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:04:57.770007    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:04:57.770089    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:04:57.780308    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:04:57.780386    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:04:57.790669    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:04:57.790738    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:04:57.801684    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:04:57.801767    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:04:57.811832    5145 logs.go:282] 0 containers: []
	W1011 15:04:57.811847    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:04:57.811917    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:04:57.822561    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:04:57.822576    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:04:57.822580    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:04:57.836124    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:04:57.836133    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:04:57.848013    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:04:57.848025    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:04:57.867273    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:04:57.867283    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:04:57.878918    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:04:57.878929    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:04:57.893109    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:04:57.893121    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:04:57.909676    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:04:57.909689    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:04:57.925335    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:04:57.925347    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:04:57.949497    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:04:57.949508    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:04:57.987288    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:04:57.987299    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:04:58.012890    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:04:58.012901    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:04:58.027633    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:04:58.027648    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:04:58.034600    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:04:58.034621    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:04:58.070901    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:04:58.070912    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:04:58.087395    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:04:58.087406    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:04:58.099243    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:04:58.099253    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:04:58.116707    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:04:58.116717    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:05:00.631225    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:05:05.633515    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:05:05.633980    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:05:05.669758    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:05:05.669958    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:05:05.695726    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:05:05.695842    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:05:05.710218    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:05:05.710307    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:05:05.721818    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:05:05.721891    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:05:05.732127    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:05:05.732206    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:05:05.742788    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:05:05.742854    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:05:05.753443    5145 logs.go:282] 0 containers: []
	W1011 15:05:05.753455    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:05:05.753517    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:05:05.765928    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:05:05.765946    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:05:05.765952    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:05:05.791593    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:05:05.791604    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:05:05.804210    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:05:05.804220    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:05:05.829646    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:05:05.829654    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:05:05.868697    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:05:05.868708    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:05:05.911970    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:05:05.911981    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:05:05.926104    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:05:05.926115    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:05:05.937569    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:05:05.937581    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:05:05.952774    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:05:05.952785    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:05:05.965230    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:05:05.965240    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:05:05.977240    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:05:05.977253    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:05:05.989367    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:05:05.989377    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:05:05.993656    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:05:05.993661    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:05:06.007511    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:05:06.007519    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:05:06.022249    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:05:06.022260    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:05:06.045503    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:05:06.045512    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:05:06.057457    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:05:06.057473    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:05:08.573568    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:05:13.576145    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:05:13.576301    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:05:13.589288    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:05:13.589370    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:05:13.600145    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:05:13.600225    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:05:13.611127    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:05:13.611206    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:05:13.624567    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:05:13.624642    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:05:13.634876    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:05:13.634953    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:05:13.645149    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:05:13.645222    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:05:13.655327    5145 logs.go:282] 0 containers: []
	W1011 15:05:13.655339    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:05:13.655405    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:05:13.666242    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:05:13.666260    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:05:13.666266    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:05:13.670468    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:05:13.670477    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:05:13.684399    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:05:13.684408    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:05:13.699918    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:05:13.699928    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:05:13.722594    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:05:13.722605    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:05:13.757013    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:05:13.757024    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:05:13.782445    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:05:13.782455    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:05:13.797138    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:05:13.797149    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:05:13.808939    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:05:13.808948    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:05:13.820286    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:05:13.820297    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:05:13.833460    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:05:13.833471    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:05:13.872596    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:05:13.872607    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:05:13.893861    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:05:13.893871    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:05:13.904877    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:05:13.904890    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:05:13.916265    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:05:13.916280    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:05:13.933792    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:05:13.933802    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:05:13.944962    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:05:13.944974    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:05:16.470969    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:05:21.473556    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:05:21.473989    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:05:21.503859    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:05:21.504006    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:05:21.522988    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:05:21.523081    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:05:21.538587    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:05:21.538665    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:05:21.550420    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:05:21.550526    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:05:21.561150    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:05:21.561232    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:05:21.573837    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:05:21.573925    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:05:21.585155    5145 logs.go:282] 0 containers: []
	W1011 15:05:21.585169    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:05:21.585234    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:05:21.596166    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:05:21.596185    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:05:21.596191    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:05:21.608039    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:05:21.608051    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:05:21.620703    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:05:21.620716    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:05:21.632773    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:05:21.632783    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:05:21.645233    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:05:21.645244    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:05:21.664975    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:05:21.664985    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:05:21.682691    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:05:21.682704    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:05:21.694814    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:05:21.694826    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:05:21.734626    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:05:21.734637    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:05:21.752143    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:05:21.752153    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:05:21.776849    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:05:21.776859    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:05:21.788684    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:05:21.788695    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:05:21.800630    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:05:21.800641    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:05:21.823835    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:05:21.823841    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:05:21.828333    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:05:21.828340    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:05:21.866314    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:05:21.866329    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:05:21.880588    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:05:21.880598    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:05:24.397220    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:05:29.398052    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:05:29.398254    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:05:29.416363    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:05:29.416462    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:05:29.429872    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:05:29.429960    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:05:29.440972    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:05:29.441048    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:05:29.451702    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:05:29.451787    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:05:29.462383    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:05:29.462458    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:05:29.473136    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:05:29.473211    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:05:29.484087    5145 logs.go:282] 0 containers: []
	W1011 15:05:29.484099    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:05:29.484168    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:05:29.494657    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:05:29.494674    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:05:29.494679    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:05:29.505652    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:05:29.505665    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:05:29.523573    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:05:29.523583    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:05:29.548668    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:05:29.548675    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:05:29.563194    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:05:29.563209    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:05:29.603160    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:05:29.603171    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:05:29.637578    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:05:29.637591    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:05:29.651526    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:05:29.651539    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:05:29.665657    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:05:29.665666    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:05:29.680036    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:05:29.680045    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:05:29.692354    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:05:29.692366    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:05:29.717338    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:05:29.717347    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:05:29.728968    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:05:29.728978    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:05:29.741219    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:05:29.741228    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:05:29.752950    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:05:29.752959    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:05:29.764316    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:05:29.764330    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:05:29.768774    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:05:29.768781    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:05:32.280872    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:05:37.283092    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:05:37.283284    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:05:37.298373    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:05:37.298462    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:05:37.310563    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:05:37.310640    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:05:37.321077    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:05:37.321160    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:05:37.331235    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:05:37.331307    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:05:37.341775    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:05:37.341856    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:05:37.352344    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:05:37.352423    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:05:37.362460    5145 logs.go:282] 0 containers: []
	W1011 15:05:37.362470    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:05:37.362534    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:05:37.375235    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:05:37.375252    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:05:37.375257    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:05:37.392558    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:05:37.392567    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:05:37.403490    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:05:37.403500    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:05:37.427148    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:05:37.427157    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:05:37.438793    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:05:37.438802    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:05:37.474044    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:05:37.474054    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:05:37.489967    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:05:37.489978    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:05:37.500972    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:05:37.500983    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:05:37.505190    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:05:37.505199    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:05:37.519255    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:05:37.519264    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:05:37.544504    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:05:37.544516    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:05:37.558493    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:05:37.558505    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:05:37.572543    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:05:37.572556    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:05:37.583599    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:05:37.583611    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:05:37.595908    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:05:37.595920    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:05:37.610747    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:05:37.610758    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:05:37.651087    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:05:37.651096    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:05:40.164912    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:05:45.167191    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:05:45.167373    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:05:45.178593    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:05:45.178684    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:05:45.189637    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:05:45.189713    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:05:45.202050    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:05:45.202128    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:05:45.212873    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:05:45.212961    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:05:45.224636    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:05:45.224715    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:05:45.237974    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:05:45.238044    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:05:45.248398    5145 logs.go:282] 0 containers: []
	W1011 15:05:45.248411    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:05:45.248475    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:05:45.259090    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:05:45.259111    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:05:45.259117    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:05:45.295815    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:05:45.295826    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:05:45.310113    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:05:45.310127    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:05:45.322872    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:05:45.322886    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:05:45.362038    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:05:45.362049    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:05:45.391519    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:05:45.391532    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:05:45.404086    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:05:45.404098    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:05:45.418536    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:05:45.418545    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:05:45.432974    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:05:45.432984    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:05:45.455874    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:05:45.455880    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:05:45.460256    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:05:45.460262    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:05:45.475447    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:05:45.475457    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:05:45.487371    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:05:45.487384    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:05:45.506107    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:05:45.506118    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:05:45.520943    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:05:45.520957    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:05:45.531839    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:05:45.531850    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:05:45.543350    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:05:45.543364    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:05:48.063829    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:05:53.065992    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:05:53.066229    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:05:53.093553    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:05:53.093647    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:05:53.107515    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:05:53.107602    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:05:53.119431    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:05:53.119512    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:05:53.130183    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:05:53.130259    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:05:53.140747    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:05:53.140821    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:05:53.150862    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:05:53.150931    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:05:53.161212    5145 logs.go:282] 0 containers: []
	W1011 15:05:53.161226    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:05:53.161287    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:05:53.172588    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:05:53.172610    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:05:53.172615    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:05:53.184722    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:05:53.184732    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:05:53.200464    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:05:53.200477    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:05:53.212342    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:05:53.212354    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:05:53.230515    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:05:53.230523    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:05:53.241590    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:05:53.241601    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:05:53.245640    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:05:53.245649    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:05:53.257638    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:05:53.257649    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:05:53.273512    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:05:53.273521    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:05:53.290813    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:05:53.290823    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:05:53.303397    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:05:53.303408    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:05:53.326395    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:05:53.326404    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:05:53.363085    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:05:53.363096    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:05:53.388789    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:05:53.388802    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:05:53.402648    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:05:53.402657    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:05:53.437381    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:05:53.437391    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:05:53.454897    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:05:53.454907    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:05:55.971812    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:06:00.972820    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:06:00.973146    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:06:01.005318    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:06:01.005464    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:06:01.026704    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:06:01.026805    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:06:01.041296    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:06:01.041387    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:06:01.054776    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:06:01.054859    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:06:01.070406    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:06:01.070480    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:06:01.081901    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:06:01.081981    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:06:01.092522    5145 logs.go:282] 0 containers: []
	W1011 15:06:01.092534    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:06:01.092600    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:06:01.103477    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:06:01.103496    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:06:01.103501    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:06:01.117715    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:06:01.117725    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:06:01.129557    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:06:01.129573    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:06:01.155261    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:06:01.155271    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:06:01.167740    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:06:01.167751    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:06:01.190560    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:06:01.190571    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:06:01.205396    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:06:01.205410    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:06:01.220538    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:06:01.220551    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:06:01.234549    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:06:01.234559    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:06:01.249991    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:06:01.250002    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:06:01.261657    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:06:01.261669    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:06:01.273644    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:06:01.273656    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:06:01.285494    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:06:01.285504    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:06:01.289494    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:06:01.289501    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:06:01.300971    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:06:01.300981    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:06:01.325931    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:06:01.325941    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:06:01.361184    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:06:01.361197    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:06:03.900903    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:06:08.903254    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:06:08.903489    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:06:08.930505    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:06:08.930604    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:06:08.943628    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:06:08.943709    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:06:08.954518    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:06:08.954595    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:06:08.965090    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:06:08.965170    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:06:08.978922    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:06:08.978995    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:06:08.996667    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:06:08.996749    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:06:09.007088    5145 logs.go:282] 0 containers: []
	W1011 15:06:09.007099    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:06:09.007164    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:06:09.017927    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:06:09.017946    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:06:09.017952    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:06:09.029596    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:06:09.029611    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:06:09.035336    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:06:09.035344    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:06:09.046806    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:06:09.046819    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:06:09.069039    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:06:09.069049    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:06:09.086702    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:06:09.086715    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:06:09.098814    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:06:09.098825    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:06:09.110527    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:06:09.110537    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:06:09.125258    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:06:09.125268    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:06:09.157638    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:06:09.157649    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:06:09.173018    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:06:09.173031    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:06:09.210668    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:06:09.210679    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:06:09.245641    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:06:09.245652    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:06:09.260386    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:06:09.260396    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:06:09.285011    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:06:09.285023    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:06:09.298703    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:06:09.298715    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:06:09.313546    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:06:09.313558    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:06:11.837946    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:06:16.840217    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:06:16.840444    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:06:16.862021    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:06:16.862127    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:06:16.885742    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:06:16.885820    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:06:16.897432    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:06:16.897513    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:06:16.908462    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:06:16.908539    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:06:16.919275    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:06:16.919350    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:06:16.933607    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:06:16.933684    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:06:16.943792    5145 logs.go:282] 0 containers: []
	W1011 15:06:16.943804    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:06:16.943866    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:06:16.954165    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:06:16.954184    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:06:16.954190    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:06:16.958551    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:06:16.958563    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:06:16.970713    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:06:16.970724    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:06:16.995159    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:06:16.995167    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:06:17.007184    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:06:17.007194    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:06:17.020964    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:06:17.020978    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:06:17.037997    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:06:17.038009    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:06:17.053182    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:06:17.053193    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:06:17.065352    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:06:17.065363    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:06:17.083110    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:06:17.083120    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:06:17.117749    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:06:17.117764    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:06:17.142823    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:06:17.142834    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:06:17.182530    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:06:17.182538    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:06:17.201266    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:06:17.201276    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:06:17.218891    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:06:17.218906    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:06:17.235774    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:06:17.235784    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:06:17.248824    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:06:17.248834    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:06:19.762236    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:06:24.764553    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:06:24.764762    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:06:24.791368    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:06:24.791492    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:06:24.808660    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:06:24.808755    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:06:24.822711    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:06:24.822791    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:06:24.834392    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:06:24.834476    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:06:24.844504    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:06:24.844574    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:06:24.854613    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:06:24.854684    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:06:24.865249    5145 logs.go:282] 0 containers: []
	W1011 15:06:24.865265    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:06:24.865341    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:06:24.875786    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:06:24.875804    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:06:24.875811    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:06:24.904500    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:06:24.904510    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:06:24.915765    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:06:24.915776    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:06:24.928692    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:06:24.928706    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:06:24.940264    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:06:24.940272    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:06:24.980601    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:06:24.980616    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:06:25.005433    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:06:25.005445    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:06:25.017440    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:06:25.017450    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:06:25.021599    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:06:25.021605    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:06:25.047630    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:06:25.047644    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:06:25.066232    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:06:25.066247    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:06:25.082162    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:06:25.082173    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:06:25.093828    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:06:25.093838    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:06:25.111666    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:06:25.111676    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:06:25.122848    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:06:25.122859    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:06:25.165061    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:06:25.165071    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:06:25.180617    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:06:25.180629    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:06:27.695068    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:06:32.697443    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:06:32.697717    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:06:32.721939    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:06:32.722058    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:06:32.737461    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:06:32.737555    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:06:32.751067    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:06:32.751152    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:06:32.761621    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:06:32.761699    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:06:32.773313    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:06:32.773391    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:06:32.783985    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:06:32.784058    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:06:32.794904    5145 logs.go:282] 0 containers: []
	W1011 15:06:32.794915    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:06:32.794981    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:06:32.806021    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:06:32.806037    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:06:32.806042    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:06:32.817754    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:06:32.817763    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:06:32.832479    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:06:32.832494    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:06:32.846071    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:06:32.846082    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:06:32.857466    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:06:32.857475    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:06:32.868706    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:06:32.868716    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:06:32.905703    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:06:32.905714    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:06:32.931141    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:06:32.931154    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:06:32.945469    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:06:32.945478    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:06:32.960448    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:06:32.960458    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:06:32.978584    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:06:32.978596    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:06:32.991041    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:06:32.991051    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:06:32.995744    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:06:32.995753    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:06:33.013364    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:06:33.013375    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:06:33.030164    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:06:33.030175    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:06:33.054463    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:06:33.054472    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:06:33.066971    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:06:33.066981    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
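
	Each log-gathering round above follows the same two-step pattern per control-plane component: enumerate matching container IDs with docker ps, then dump the last 400 lines of each with docker logs. A condensed sketch of that loop for a single component (illustrative only; the test harness runs each command separately over SSH):

	  # list all kube-apiserver containers (running or exited) and print their recent logs
	  for id in $(docker ps -a --filter=name=k8s_kube-apiserver --format '{{.ID}}'); do
	    docker logs --tail 400 "$id"
	  done
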
	I1011 15:06:35.606489    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:06:40.608805    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:06:40.608968    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:06:40.622930    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:06:40.623018    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:06:40.635597    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:06:40.635678    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:06:40.646272    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:06:40.646356    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:06:40.658236    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:06:40.658313    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:06:40.668465    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:06:40.668540    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:06:40.679224    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:06:40.679302    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:06:40.689807    5145 logs.go:282] 0 containers: []
	W1011 15:06:40.689818    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:06:40.689881    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:06:40.700412    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:06:40.700429    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:06:40.700435    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:06:40.735092    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:06:40.735106    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:06:40.747585    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:06:40.747598    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:06:40.761042    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:06:40.761052    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:06:40.784751    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:06:40.784760    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:06:40.823800    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:06:40.823808    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:06:40.828172    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:06:40.828180    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:06:40.852571    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:06:40.852581    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:06:40.866414    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:06:40.866424    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:06:40.877616    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:06:40.877631    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:06:40.889251    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:06:40.889265    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:06:40.906489    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:06:40.906498    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:06:40.920801    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:06:40.920811    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:06:40.932440    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:06:40.932454    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:06:40.946859    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:06:40.946870    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:06:40.961122    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:06:40.961132    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:06:40.977624    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:06:40.977634    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:06:43.491146    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:06:48.493422    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:06:48.493724    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:06:48.528922    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:06:48.529021    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:06:48.546752    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:06:48.546833    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:06:48.560517    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:06:48.560590    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:06:48.573114    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:06:48.573197    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:06:48.583329    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:06:48.583404    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:06:48.594009    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:06:48.594081    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:06:48.604491    5145 logs.go:282] 0 containers: []
	W1011 15:06:48.604501    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:06:48.604564    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:06:48.618549    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:06:48.618566    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:06:48.618570    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:06:48.656021    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:06:48.656030    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:06:48.670337    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:06:48.670347    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:06:48.687442    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:06:48.687453    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:06:48.700510    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:06:48.700520    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:06:48.714500    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:06:48.714512    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:06:48.738960    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:06:48.738971    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:06:48.755741    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:06:48.755753    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:06:48.791637    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:06:48.791648    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:06:48.808339    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:06:48.808351    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:06:48.819554    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:06:48.819568    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:06:48.842683    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:06:48.842692    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:06:48.854929    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:06:48.854941    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:06:48.859603    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:06:48.859612    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:06:48.871218    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:06:48.871228    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:06:48.883711    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:06:48.883722    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:06:48.895278    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:06:48.895290    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:06:51.409134    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:06:56.410260    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:06:56.410424    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:06:56.429088    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:06:56.429184    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:06:56.446918    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:06:56.447007    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:06:56.457968    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:06:56.458049    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:06:56.476891    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:06:56.476968    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:06:56.487835    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:06:56.487911    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:06:56.498843    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:06:56.498920    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:06:56.510163    5145 logs.go:282] 0 containers: []
	W1011 15:06:56.510175    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:06:56.510236    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:06:56.523335    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:06:56.523355    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:06:56.523359    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:06:56.544814    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:06:56.544823    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:06:56.556045    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:06:56.556058    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:06:56.568393    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:06:56.568407    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:06:56.580226    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:06:56.580239    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:06:56.594432    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:06:56.594441    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:06:56.608454    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:06:56.608468    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:06:56.620530    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:06:56.620543    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:06:56.635722    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:06:56.635734    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:06:56.650279    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:06:56.650287    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:06:56.668132    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:06:56.668146    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:06:56.702691    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:06:56.702700    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:06:56.728812    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:06:56.728822    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:06:56.740270    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:06:56.740284    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:06:56.751688    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:06:56.751699    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:06:56.775253    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:06:56.775262    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:06:56.814498    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:06:56.814506    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:06:59.321111    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:07:04.321604    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:07:04.321925    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:07:04.353033    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:07:04.353165    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:07:04.372043    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:07:04.372143    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:07:04.386263    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:07:04.386351    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:07:04.398402    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:07:04.398478    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:07:04.409431    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:07:04.409507    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:07:04.420918    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:07:04.421000    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:07:04.431235    5145 logs.go:282] 0 containers: []
	W1011 15:07:04.431247    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:07:04.431316    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:07:04.442318    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:07:04.442336    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:07:04.442343    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:07:04.460155    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:07:04.460166    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:07:04.472824    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:07:04.472837    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:07:04.499961    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:07:04.499975    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:07:04.514709    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:07:04.514722    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:07:04.526967    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:07:04.526977    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:07:04.549797    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:07:04.549807    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:07:04.554482    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:07:04.554488    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:07:04.591779    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:07:04.591792    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:07:04.606093    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:07:04.606103    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:07:04.617842    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:07:04.617856    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:07:04.629125    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:07:04.629141    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:07:04.642092    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:07:04.642103    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:07:04.658080    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:07:04.658090    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:07:04.670670    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:07:04.670682    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:07:04.711128    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:07:04.711144    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:07:04.725092    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:07:04.725101    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:07:07.241257    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:07:12.242801    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:07:12.243263    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:07:12.278922    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:07:12.279076    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:07:12.304746    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:07:12.304848    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:07:12.322109    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:07:12.322191    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:07:12.332719    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:07:12.332798    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:07:12.343724    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:07:12.343803    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:07:12.357366    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:07:12.357443    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:07:12.367589    5145 logs.go:282] 0 containers: []
	W1011 15:07:12.367599    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:07:12.367665    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:07:12.378210    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:07:12.378233    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:07:12.378239    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:07:12.389978    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:07:12.389988    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:07:12.403352    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:07:12.403361    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:07:12.415054    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:07:12.415064    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:07:12.419887    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:07:12.419893    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:07:12.453693    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:07:12.453706    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:07:12.468334    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:07:12.468344    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:07:12.479916    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:07:12.479926    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:07:12.491602    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:07:12.491611    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:07:12.519913    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:07:12.519926    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:07:12.534690    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:07:12.534700    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:07:12.549151    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:07:12.549161    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:07:12.562494    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:07:12.562504    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:07:12.579887    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:07:12.579896    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:07:12.597860    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:07:12.597871    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:07:12.637580    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:07:12.637594    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:07:12.655435    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:07:12.655447    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:07:15.187958    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:07:20.190544    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:07:20.190885    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:07:20.206184    5145 logs.go:282] 2 containers: [7d7bd85ab046 e5ff18c232f1]
	I1011 15:07:20.206286    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:07:20.219010    5145 logs.go:282] 2 containers: [86cbe0acf254 26a6947a1458]
	I1011 15:07:20.219090    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:07:20.230005    5145 logs.go:282] 1 containers: [7b5338879d88]
	I1011 15:07:20.230082    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:07:20.240107    5145 logs.go:282] 2 containers: [b9e1a2b02648 b001d59290a4]
	I1011 15:07:20.240177    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:07:20.260125    5145 logs.go:282] 1 containers: [67ae51b0fdf3]
	I1011 15:07:20.260195    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:07:20.270772    5145 logs.go:282] 2 containers: [a937c52e6d9d 3147d798970d]
	I1011 15:07:20.270843    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:07:20.281674    5145 logs.go:282] 0 containers: []
	W1011 15:07:20.281686    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:07:20.281753    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:07:20.292618    5145 logs.go:282] 2 containers: [500d2bd526c9 d4388f1e5798]
	I1011 15:07:20.292638    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:07:20.292643    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:07:20.329478    5145 logs.go:123] Gathering logs for coredns [7b5338879d88] ...
	I1011 15:07:20.329486    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5338879d88"
	I1011 15:07:20.345220    5145 logs.go:123] Gathering logs for kube-scheduler [b001d59290a4] ...
	I1011 15:07:20.345230    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b001d59290a4"
	I1011 15:07:20.360775    5145 logs.go:123] Gathering logs for kube-controller-manager [a937c52e6d9d] ...
	I1011 15:07:20.360790    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a937c52e6d9d"
	I1011 15:07:20.378876    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:07:20.378886    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:07:20.400652    5145 logs.go:123] Gathering logs for kube-apiserver [7d7bd85ab046] ...
	I1011 15:07:20.400659    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d7bd85ab046"
	I1011 15:07:20.414361    5145 logs.go:123] Gathering logs for kube-apiserver [e5ff18c232f1] ...
	I1011 15:07:20.414371    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5ff18c232f1"
	I1011 15:07:20.438659    5145 logs.go:123] Gathering logs for etcd [26a6947a1458] ...
	I1011 15:07:20.438672    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26a6947a1458"
	I1011 15:07:20.453616    5145 logs.go:123] Gathering logs for kube-scheduler [b9e1a2b02648] ...
	I1011 15:07:20.453625    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e1a2b02648"
	I1011 15:07:20.465642    5145 logs.go:123] Gathering logs for kube-controller-manager [3147d798970d] ...
	I1011 15:07:20.465652    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3147d798970d"
	I1011 15:07:20.478364    5145 logs.go:123] Gathering logs for storage-provisioner [d4388f1e5798] ...
	I1011 15:07:20.478375    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4388f1e5798"
	I1011 15:07:20.490363    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:07:20.490374    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:07:20.495164    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:07:20.495173    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:07:20.531478    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:07:20.531489    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:07:20.544115    5145 logs.go:123] Gathering logs for etcd [86cbe0acf254] ...
	I1011 15:07:20.544124    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86cbe0acf254"
	I1011 15:07:20.558656    5145 logs.go:123] Gathering logs for kube-proxy [67ae51b0fdf3] ...
	I1011 15:07:20.558665    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67ae51b0fdf3"
	I1011 15:07:20.570424    5145 logs.go:123] Gathering logs for storage-provisioner [500d2bd526c9] ...
	I1011 15:07:20.570435    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500d2bd526c9"
	I1011 15:07:23.084150    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:07:28.086158    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:07:28.086207    5145 kubeadm.go:597] duration metric: took 4m3.760081333s to restartPrimaryControlPlane
	W1011 15:07:28.086255    5145 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1011 15:07:28.086271    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1011 15:07:29.125727    5145 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.0394575s)
	I1011 15:07:29.125831    5145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 15:07:29.130965    5145 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 15:07:29.134071    5145 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 15:07:29.136925    5145 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 15:07:29.136930    5145 kubeadm.go:157] found existing configuration files:
	
	I1011 15:07:29.136974    5145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57470 /etc/kubernetes/admin.conf
	I1011 15:07:29.139522    5145 kubeadm.go:163] "https://control-plane.minikube.internal:57470" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:57470 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 15:07:29.139548    5145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 15:07:29.142181    5145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57470 /etc/kubernetes/kubelet.conf
	I1011 15:07:29.145539    5145 kubeadm.go:163] "https://control-plane.minikube.internal:57470" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:57470 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 15:07:29.145566    5145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 15:07:29.148240    5145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57470 /etc/kubernetes/controller-manager.conf
	I1011 15:07:29.150971    5145 kubeadm.go:163] "https://control-plane.minikube.internal:57470" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:57470 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 15:07:29.150998    5145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 15:07:29.154138    5145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:57470 /etc/kubernetes/scheduler.conf
	I1011 15:07:29.156826    5145 kubeadm.go:163] "https://control-plane.minikube.internal:57470" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:57470 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 15:07:29.156851    5145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
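
	The grep/rm pairs above are the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is otherwise removed so that the kubeadm init below can regenerate it. The same check written as a compact loop (a sketch, not minikube's code):

	  # drop any kubeconfig that does not point at the expected endpoint;
	  # rm -f keeps a missing file from being treated as an error
	  for f in admin kubelet controller-manager scheduler; do
	    sudo grep -q 'https://control-plane.minikube.internal:57470' "/etc/kubernetes/$f.conf" \
	      || sudo rm -f "/etc/kubernetes/$f.conf"
	  done
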
	I1011 15:07:29.159369    5145 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1011 15:07:29.176417    5145 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1011 15:07:29.176457    5145 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 15:07:29.234643    5145 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 15:07:29.234736    5145 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 15:07:29.234787    5145 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1011 15:07:29.283682    5145 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 15:07:29.287872    5145 out.go:235]   - Generating certificates and keys ...
	I1011 15:07:29.287904    5145 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 15:07:29.287933    5145 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 15:07:29.287973    5145 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1011 15:07:29.288029    5145 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1011 15:07:29.288065    5145 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1011 15:07:29.288088    5145 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1011 15:07:29.288120    5145 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1011 15:07:29.288149    5145 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1011 15:07:29.288199    5145 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1011 15:07:29.288235    5145 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1011 15:07:29.288254    5145 kubeadm.go:310] [certs] Using the existing "sa" key
	I1011 15:07:29.288284    5145 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 15:07:29.326627    5145 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 15:07:29.510878    5145 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 15:07:29.548553    5145 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 15:07:29.599617    5145 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 15:07:29.627090    5145 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 15:07:29.627481    5145 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 15:07:29.627547    5145 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 15:07:29.715380    5145 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 15:07:29.719561    5145 out.go:235]   - Booting up control plane ...
	I1011 15:07:29.719601    5145 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 15:07:29.719639    5145 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 15:07:29.719690    5145 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 15:07:29.719737    5145 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 15:07:29.719819    5145 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1011 15:07:34.220149    5145 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501934 seconds
	I1011 15:07:34.220240    5145 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1011 15:07:34.224322    5145 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1011 15:07:34.733416    5145 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1011 15:07:34.733526    5145 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-583000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1011 15:07:35.237508    5145 kubeadm.go:310] [bootstrap-token] Using token: q96muf.2a0odtdr2nd5iza9
	I1011 15:07:35.242650    5145 out.go:235]   - Configuring RBAC rules ...
	I1011 15:07:35.242706    5145 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1011 15:07:35.242753    5145 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1011 15:07:35.249417    5145 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1011 15:07:35.250444    5145 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1011 15:07:35.251534    5145 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1011 15:07:35.252578    5145 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1011 15:07:35.256587    5145 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1011 15:07:35.441660    5145 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1011 15:07:35.641848    5145 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1011 15:07:35.642414    5145 kubeadm.go:310] 
	I1011 15:07:35.642450    5145 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1011 15:07:35.642455    5145 kubeadm.go:310] 
	I1011 15:07:35.642492    5145 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1011 15:07:35.642496    5145 kubeadm.go:310] 
	I1011 15:07:35.642508    5145 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1011 15:07:35.642558    5145 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1011 15:07:35.642621    5145 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1011 15:07:35.642626    5145 kubeadm.go:310] 
	I1011 15:07:35.642685    5145 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1011 15:07:35.642691    5145 kubeadm.go:310] 
	I1011 15:07:35.642723    5145 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1011 15:07:35.642727    5145 kubeadm.go:310] 
	I1011 15:07:35.642752    5145 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1011 15:07:35.642809    5145 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1011 15:07:35.642858    5145 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1011 15:07:35.642862    5145 kubeadm.go:310] 
	I1011 15:07:35.642921    5145 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1011 15:07:35.642978    5145 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1011 15:07:35.642987    5145 kubeadm.go:310] 
	I1011 15:07:35.643026    5145 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token q96muf.2a0odtdr2nd5iza9 \
	I1011 15:07:35.643144    5145 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ff7372af64c3996e800eaf522c3eb51c544993254bf1d45ae249aa6259e8117f \
	I1011 15:07:35.643156    5145 kubeadm.go:310] 	--control-plane 
	I1011 15:07:35.643158    5145 kubeadm.go:310] 
	I1011 15:07:35.643254    5145 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1011 15:07:35.643261    5145 kubeadm.go:310] 
	I1011 15:07:35.643331    5145 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token q96muf.2a0odtdr2nd5iza9 \
	I1011 15:07:35.643396    5145 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ff7372af64c3996e800eaf522c3eb51c544993254bf1d45ae249aa6259e8117f 
	I1011 15:07:35.643527    5145 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 15:07:35.643604    5145 cni.go:84] Creating CNI manager for ""
	I1011 15:07:35.643613    5145 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1011 15:07:35.647576    5145 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 15:07:35.655554    5145 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 15:07:35.658528    5145 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
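
	Because the qemu2 driver with the docker runtime on Kubernetes v1.24+ has no other network plugin configured, cni.go recommends the bridge CNI and writes a conflist to /etc/cni/net.d/1-k8s.conflist. The 496-byte file itself is not reproduced in the log; the sketch below writes a typical bridge-plus-portmap conflist purely to illustrate the file's general shape (all field values here are assumptions, not the content minikube generated):

	  # write a representative bridge conflist (content assumed, illustration only)
	  sudo mkdir -p /etc/cni/net.d
	  echo '{
	    "cniVersion": "0.3.1",
	    "name": "bridge",
	    "plugins": [
	      { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
	        "ipMasq": true, "hairpinMode": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	      { "type": "portmap", "capabilities": { "portMappings": true } }
	    ]
	  }' | sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null
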
	I1011 15:07:35.663390    5145 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 15:07:35.663445    5145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 15:07:35.663483    5145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-583000 minikube.k8s.io/updated_at=2024_10_11T15_07_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=stopped-upgrade-583000 minikube.k8s.io/primary=true
	I1011 15:07:35.706893    5145 ops.go:34] apiserver oom_adj: -16
	I1011 15:07:35.706963    5145 kubeadm.go:1113] duration metric: took 43.562667ms to wait for elevateKubeSystemPrivileges
	I1011 15:07:35.706973    5145 kubeadm.go:394] duration metric: took 4m11.394537792s to StartCluster
	I1011 15:07:35.706982    5145 settings.go:142] acquiring lock: {Name:mka75dc1604295e2b491b48ad476a4c06f6cece7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 15:07:35.707080    5145 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 15:07:35.707525    5145 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/kubeconfig: {Name:mkc848521291f94f61a80272f8eb43a8779805e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 15:07:35.707749    5145 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 15:07:35.707765    5145 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1011 15:07:35.707799    5145 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-583000"
	I1011 15:07:35.707823    5145 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-583000"
	I1011 15:07:35.707829    5145 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-583000"
	I1011 15:07:35.707848    5145 config.go:182] Loaded profile config "stopped-upgrade-583000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1011 15:07:35.707858    5145 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-583000"
	W1011 15:07:35.707863    5145 addons.go:243] addon storage-provisioner should already be in state true
	I1011 15:07:35.707878    5145 host.go:66] Checking if "stopped-upgrade-583000" exists ...
	I1011 15:07:35.709012    5145 kapi.go:59] client config for stopped-upgrade-583000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/stopped-upgrade-583000/client.key", CAFile:"/Users/jenkins/minikube-integration/19749-1186/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101f7ee40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1011 15:07:35.709139    5145 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-583000"
	W1011 15:07:35.709144    5145 addons.go:243] addon default-storageclass should already be in state true
	I1011 15:07:35.709156    5145 host.go:66] Checking if "stopped-upgrade-583000" exists ...
	I1011 15:07:35.710537    5145 out.go:177] * Verifying Kubernetes components...
	I1011 15:07:35.710840    5145 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1011 15:07:35.714651    5145 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1011 15:07:35.714659    5145 sshutil.go:53] new ssh client: &{IP:localhost Port:57437 SSHKeyPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/stopped-upgrade-583000/id_rsa Username:docker}
	I1011 15:07:35.718490    5145 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 15:07:35.721614    5145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 15:07:35.724609    5145 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 15:07:35.724615    5145 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1011 15:07:35.724621    5145 sshutil.go:53] new ssh client: &{IP:localhost Port:57437 SSHKeyPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/stopped-upgrade-583000/id_rsa Username:docker}
	I1011 15:07:35.805557    5145 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 15:07:35.811148    5145 api_server.go:52] waiting for apiserver process to appear ...
	I1011 15:07:35.811212    5145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 15:07:35.815265    5145 api_server.go:72] duration metric: took 107.503709ms to wait for apiserver process to appear ...
	I1011 15:07:35.815274    5145 api_server.go:88] waiting for apiserver healthz status ...
	I1011 15:07:35.815281    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:07:35.861837    5145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1011 15:07:35.904350    5145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 15:07:36.209287    5145 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1011 15:07:36.209298    5145 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1011 15:07:40.817273    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:07:40.817294    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:07:45.817649    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:07:45.817684    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:07:50.817996    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:07:50.818020    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:07:55.818422    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:07:55.818471    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:08:00.819135    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:08:00.819161    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:08:05.819882    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:08:05.819911    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1011 15:08:06.211180    5145 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1011 15:08:06.215414    5145 out.go:177] * Enabled addons: storage-provisioner
	I1011 15:08:06.223399    5145 addons.go:510] duration metric: took 30.51612475s for enable addons: enabled=[storage-provisioner]
	I1011 15:08:10.820887    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:08:10.820926    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:08:15.822211    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:08:15.822254    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:08:20.824017    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:08:20.824100    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:08:25.826460    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:08:25.826500    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:08:30.827013    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:08:30.827078    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:08:35.829381    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:08:35.829602    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:08:35.849375    5145 logs.go:282] 1 containers: [aed09bb4ddd7]
	I1011 15:08:35.849480    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:08:35.884566    5145 logs.go:282] 1 containers: [27d6abe27a49]
	I1011 15:08:35.884647    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:08:35.898882    5145 logs.go:282] 2 containers: [9fca17df288b f7976848cbf8]
	I1011 15:08:35.898970    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:08:35.909702    5145 logs.go:282] 1 containers: [1e302c51837e]
	I1011 15:08:35.909831    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:08:35.920435    5145 logs.go:282] 1 containers: [08dad2a0a778]
	I1011 15:08:35.920519    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:08:35.931462    5145 logs.go:282] 1 containers: [c90d3a849357]
	I1011 15:08:35.931533    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:08:35.943806    5145 logs.go:282] 0 containers: []
	W1011 15:08:35.943816    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:08:35.943882    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:08:35.954173    5145 logs.go:282] 1 containers: [21216854be2d]
	I1011 15:08:35.954184    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:08:35.954189    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:08:35.958695    5145 logs.go:123] Gathering logs for kube-apiserver [aed09bb4ddd7] ...
	I1011 15:08:35.958702    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed09bb4ddd7"
	I1011 15:08:35.973038    5145 logs.go:123] Gathering logs for kube-scheduler [1e302c51837e] ...
	I1011 15:08:35.973047    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e302c51837e"
	I1011 15:08:35.988683    5145 logs.go:123] Gathering logs for kube-proxy [08dad2a0a778] ...
	I1011 15:08:35.988697    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dad2a0a778"
	I1011 15:08:36.001086    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:08:36.001095    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:08:36.012943    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:08:36.012952    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:08:36.052604    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:08:36.052612    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:08:36.090692    5145 logs.go:123] Gathering logs for etcd [27d6abe27a49] ...
	I1011 15:08:36.090707    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d6abe27a49"
	I1011 15:08:36.105171    5145 logs.go:123] Gathering logs for coredns [9fca17df288b] ...
	I1011 15:08:36.105183    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fca17df288b"
	I1011 15:08:36.117116    5145 logs.go:123] Gathering logs for coredns [f7976848cbf8] ...
	I1011 15:08:36.117130    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7976848cbf8"
	I1011 15:08:36.129232    5145 logs.go:123] Gathering logs for kube-controller-manager [c90d3a849357] ...
	I1011 15:08:36.129241    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90d3a849357"
	I1011 15:08:36.146128    5145 logs.go:123] Gathering logs for storage-provisioner [21216854be2d] ...
	I1011 15:08:36.146139    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21216854be2d"
	I1011 15:08:36.157666    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:08:36.157676    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:08:38.682834    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:08:43.685475    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:08:43.685730    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:08:43.706062    5145 logs.go:282] 1 containers: [aed09bb4ddd7]
	I1011 15:08:43.706165    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:08:43.721132    5145 logs.go:282] 1 containers: [27d6abe27a49]
	I1011 15:08:43.721219    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:08:43.733424    5145 logs.go:282] 2 containers: [9fca17df288b f7976848cbf8]
	I1011 15:08:43.733509    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:08:43.744759    5145 logs.go:282] 1 containers: [1e302c51837e]
	I1011 15:08:43.744835    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:08:43.759431    5145 logs.go:282] 1 containers: [08dad2a0a778]
	I1011 15:08:43.759517    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:08:43.770306    5145 logs.go:282] 1 containers: [c90d3a849357]
	I1011 15:08:43.770375    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:08:43.780885    5145 logs.go:282] 0 containers: []
	W1011 15:08:43.780896    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:08:43.780954    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:08:43.791461    5145 logs.go:282] 1 containers: [21216854be2d]
	I1011 15:08:43.791475    5145 logs.go:123] Gathering logs for coredns [f7976848cbf8] ...
	I1011 15:08:43.791480    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7976848cbf8"
	I1011 15:08:43.803084    5145 logs.go:123] Gathering logs for kube-scheduler [1e302c51837e] ...
	I1011 15:08:43.803095    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e302c51837e"
	I1011 15:08:43.818478    5145 logs.go:123] Gathering logs for storage-provisioner [21216854be2d] ...
	I1011 15:08:43.818489    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21216854be2d"
	I1011 15:08:43.829971    5145 logs.go:123] Gathering logs for kube-apiserver [aed09bb4ddd7] ...
	I1011 15:08:43.829983    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed09bb4ddd7"
	I1011 15:08:43.844243    5145 logs.go:123] Gathering logs for etcd [27d6abe27a49] ...
	I1011 15:08:43.844256    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d6abe27a49"
	I1011 15:08:43.858272    5145 logs.go:123] Gathering logs for coredns [9fca17df288b] ...
	I1011 15:08:43.858283    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fca17df288b"
	I1011 15:08:43.876357    5145 logs.go:123] Gathering logs for kube-proxy [08dad2a0a778] ...
	I1011 15:08:43.876370    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dad2a0a778"
	I1011 15:08:43.890067    5145 logs.go:123] Gathering logs for kube-controller-manager [c90d3a849357] ...
	I1011 15:08:43.890079    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90d3a849357"
	I1011 15:08:43.908423    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:08:43.908433    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:08:43.948322    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:08:43.948333    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:08:43.952830    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:08:43.952839    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:08:43.990290    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:08:43.990303    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:08:44.013793    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:08:44.013804    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:08:46.526920    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:08:51.529431    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:08:51.529846    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:08:51.573342    5145 logs.go:282] 1 containers: [aed09bb4ddd7]
	I1011 15:08:51.573472    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:08:51.592985    5145 logs.go:282] 1 containers: [27d6abe27a49]
	I1011 15:08:51.593073    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:08:51.605037    5145 logs.go:282] 2 containers: [9fca17df288b f7976848cbf8]
	I1011 15:08:51.605120    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:08:51.619161    5145 logs.go:282] 1 containers: [1e302c51837e]
	I1011 15:08:51.619240    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:08:51.630263    5145 logs.go:282] 1 containers: [08dad2a0a778]
	I1011 15:08:51.630341    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:08:51.641128    5145 logs.go:282] 1 containers: [c90d3a849357]
	I1011 15:08:51.641205    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:08:51.651343    5145 logs.go:282] 0 containers: []
	W1011 15:08:51.651357    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:08:51.651422    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:08:51.661883    5145 logs.go:282] 1 containers: [21216854be2d]
	I1011 15:08:51.661902    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:08:51.661910    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:08:51.675074    5145 logs.go:123] Gathering logs for coredns [9fca17df288b] ...
	I1011 15:08:51.675087    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fca17df288b"
	I1011 15:08:51.687331    5145 logs.go:123] Gathering logs for kube-scheduler [1e302c51837e] ...
	I1011 15:08:51.687344    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e302c51837e"
	I1011 15:08:51.703036    5145 logs.go:123] Gathering logs for kube-proxy [08dad2a0a778] ...
	I1011 15:08:51.703049    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dad2a0a778"
	I1011 15:08:51.715008    5145 logs.go:123] Gathering logs for kube-controller-manager [c90d3a849357] ...
	I1011 15:08:51.715018    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90d3a849357"
	I1011 15:08:51.732365    5145 logs.go:123] Gathering logs for storage-provisioner [21216854be2d] ...
	I1011 15:08:51.732377    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21216854be2d"
	I1011 15:08:51.744055    5145 logs.go:123] Gathering logs for coredns [f7976848cbf8] ...
	I1011 15:08:51.744069    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7976848cbf8"
	I1011 15:08:51.759487    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:08:51.759499    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:08:51.784781    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:08:51.784791    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:08:51.824197    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:08:51.824203    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:08:51.828307    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:08:51.828314    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:08:51.864172    5145 logs.go:123] Gathering logs for kube-apiserver [aed09bb4ddd7] ...
	I1011 15:08:51.864182    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed09bb4ddd7"
	I1011 15:08:51.884140    5145 logs.go:123] Gathering logs for etcd [27d6abe27a49] ...
	I1011 15:08:51.884150    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d6abe27a49"
	I1011 15:08:54.400545    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:08:59.403455    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:08:59.404019    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:08:59.442757    5145 logs.go:282] 1 containers: [aed09bb4ddd7]
	I1011 15:08:59.442919    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:08:59.465513    5145 logs.go:282] 1 containers: [27d6abe27a49]
	I1011 15:08:59.465630    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:08:59.480985    5145 logs.go:282] 2 containers: [9fca17df288b f7976848cbf8]
	I1011 15:08:59.481062    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:08:59.493539    5145 logs.go:282] 1 containers: [1e302c51837e]
	I1011 15:08:59.493609    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:08:59.504570    5145 logs.go:282] 1 containers: [08dad2a0a778]
	I1011 15:08:59.504648    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:08:59.515226    5145 logs.go:282] 1 containers: [c90d3a849357]
	I1011 15:08:59.515302    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:08:59.526378    5145 logs.go:282] 0 containers: []
	W1011 15:08:59.526390    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:08:59.526451    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:08:59.537209    5145 logs.go:282] 1 containers: [21216854be2d]
	I1011 15:08:59.537225    5145 logs.go:123] Gathering logs for kube-scheduler [1e302c51837e] ...
	I1011 15:08:59.537230    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e302c51837e"
	I1011 15:08:59.553284    5145 logs.go:123] Gathering logs for kube-proxy [08dad2a0a778] ...
	I1011 15:08:59.553297    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dad2a0a778"
	I1011 15:08:59.571912    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:08:59.571924    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:08:59.608661    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:08:59.608671    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:08:59.613318    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:08:59.613327    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:08:59.647206    5145 logs.go:123] Gathering logs for coredns [9fca17df288b] ...
	I1011 15:08:59.647218    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fca17df288b"
	I1011 15:08:59.658608    5145 logs.go:123] Gathering logs for coredns [f7976848cbf8] ...
	I1011 15:08:59.658618    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7976848cbf8"
	I1011 15:08:59.670831    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:08:59.670844    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:08:59.682347    5145 logs.go:123] Gathering logs for kube-apiserver [aed09bb4ddd7] ...
	I1011 15:08:59.682357    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed09bb4ddd7"
	I1011 15:08:59.696900    5145 logs.go:123] Gathering logs for etcd [27d6abe27a49] ...
	I1011 15:08:59.696912    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d6abe27a49"
	I1011 15:08:59.711360    5145 logs.go:123] Gathering logs for kube-controller-manager [c90d3a849357] ...
	I1011 15:08:59.711370    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90d3a849357"
	I1011 15:08:59.729446    5145 logs.go:123] Gathering logs for storage-provisioner [21216854be2d] ...
	I1011 15:08:59.729457    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21216854be2d"
	I1011 15:08:59.740770    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:08:59.740781    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:09:02.265864    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:09:07.268239    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:09:07.268447    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:09:07.283118    5145 logs.go:282] 1 containers: [aed09bb4ddd7]
	I1011 15:09:07.283204    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:09:07.294128    5145 logs.go:282] 1 containers: [27d6abe27a49]
	I1011 15:09:07.294200    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:09:07.304724    5145 logs.go:282] 2 containers: [9fca17df288b f7976848cbf8]
	I1011 15:09:07.304803    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:09:07.315580    5145 logs.go:282] 1 containers: [1e302c51837e]
	I1011 15:09:07.315655    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:09:07.326094    5145 logs.go:282] 1 containers: [08dad2a0a778]
	I1011 15:09:07.326167    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:09:07.336811    5145 logs.go:282] 1 containers: [c90d3a849357]
	I1011 15:09:07.336888    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:09:07.347207    5145 logs.go:282] 0 containers: []
	W1011 15:09:07.347218    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:09:07.347281    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:09:07.358767    5145 logs.go:282] 1 containers: [21216854be2d]
	I1011 15:09:07.358790    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:09:07.358795    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:09:07.397499    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:09:07.397507    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:09:07.401548    5145 logs.go:123] Gathering logs for kube-scheduler [1e302c51837e] ...
	I1011 15:09:07.401557    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e302c51837e"
	I1011 15:09:07.416948    5145 logs.go:123] Gathering logs for storage-provisioner [21216854be2d] ...
	I1011 15:09:07.416960    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21216854be2d"
	I1011 15:09:07.427911    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:09:07.427922    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:09:07.450866    5145 logs.go:123] Gathering logs for kube-proxy [08dad2a0a778] ...
	I1011 15:09:07.450873    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dad2a0a778"
	I1011 15:09:07.462432    5145 logs.go:123] Gathering logs for kube-controller-manager [c90d3a849357] ...
	I1011 15:09:07.462445    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90d3a849357"
	I1011 15:09:07.480799    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:09:07.480812    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:09:07.492448    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:09:07.492458    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:09:07.530875    5145 logs.go:123] Gathering logs for kube-apiserver [aed09bb4ddd7] ...
	I1011 15:09:07.530891    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed09bb4ddd7"
	I1011 15:09:07.547462    5145 logs.go:123] Gathering logs for etcd [27d6abe27a49] ...
	I1011 15:09:07.547475    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d6abe27a49"
	I1011 15:09:07.563662    5145 logs.go:123] Gathering logs for coredns [9fca17df288b] ...
	I1011 15:09:07.563681    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fca17df288b"
	I1011 15:09:07.577863    5145 logs.go:123] Gathering logs for coredns [f7976848cbf8] ...
	I1011 15:09:07.577878    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7976848cbf8"
	I1011 15:09:10.093055    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:09:15.093802    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:09:15.094365    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:09:15.135739    5145 logs.go:282] 1 containers: [aed09bb4ddd7]
	I1011 15:09:15.135887    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:09:15.158368    5145 logs.go:282] 1 containers: [27d6abe27a49]
	I1011 15:09:15.158458    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:09:15.174479    5145 logs.go:282] 2 containers: [9fca17df288b f7976848cbf8]
	I1011 15:09:15.174563    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:09:15.187475    5145 logs.go:282] 1 containers: [1e302c51837e]
	I1011 15:09:15.187563    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:09:15.198995    5145 logs.go:282] 1 containers: [08dad2a0a778]
	I1011 15:09:15.199053    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:09:15.210480    5145 logs.go:282] 1 containers: [c90d3a849357]
	I1011 15:09:15.210567    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:09:15.227460    5145 logs.go:282] 0 containers: []
	W1011 15:09:15.227472    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:09:15.227519    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:09:15.239533    5145 logs.go:282] 1 containers: [21216854be2d]
	I1011 15:09:15.239550    5145 logs.go:123] Gathering logs for coredns [f7976848cbf8] ...
	I1011 15:09:15.239556    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7976848cbf8"
	I1011 15:09:15.252255    5145 logs.go:123] Gathering logs for storage-provisioner [21216854be2d] ...
	I1011 15:09:15.252266    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21216854be2d"
	I1011 15:09:15.271691    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:09:15.271702    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:09:15.296421    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:09:15.296430    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:09:15.308317    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:09:15.308325    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:09:15.345970    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:09:15.345983    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:09:15.350284    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:09:15.350290    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:09:15.384769    5145 logs.go:123] Gathering logs for kube-scheduler [1e302c51837e] ...
	I1011 15:09:15.384780    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e302c51837e"
	I1011 15:09:15.399959    5145 logs.go:123] Gathering logs for kube-proxy [08dad2a0a778] ...
	I1011 15:09:15.399969    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dad2a0a778"
	I1011 15:09:15.411672    5145 logs.go:123] Gathering logs for kube-controller-manager [c90d3a849357] ...
	I1011 15:09:15.411681    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90d3a849357"
	I1011 15:09:15.435048    5145 logs.go:123] Gathering logs for kube-apiserver [aed09bb4ddd7] ...
	I1011 15:09:15.435058    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed09bb4ddd7"
	I1011 15:09:15.449858    5145 logs.go:123] Gathering logs for etcd [27d6abe27a49] ...
	I1011 15:09:15.449867    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d6abe27a49"
	I1011 15:09:15.463295    5145 logs.go:123] Gathering logs for coredns [9fca17df288b] ...
	I1011 15:09:15.463306    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fca17df288b"
	I1011 15:09:17.976655    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:09:22.979353    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:09:22.979785    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:09:23.018559    5145 logs.go:282] 1 containers: [aed09bb4ddd7]
	I1011 15:09:23.018696    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:09:23.037967    5145 logs.go:282] 1 containers: [27d6abe27a49]
	I1011 15:09:23.038070    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:09:23.051270    5145 logs.go:282] 2 containers: [9fca17df288b f7976848cbf8]
	I1011 15:09:23.051349    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:09:23.062628    5145 logs.go:282] 1 containers: [1e302c51837e]
	I1011 15:09:23.062705    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:09:23.073308    5145 logs.go:282] 1 containers: [08dad2a0a778]
	I1011 15:09:23.073382    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:09:23.083696    5145 logs.go:282] 1 containers: [c90d3a849357]
	I1011 15:09:23.083772    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:09:23.094654    5145 logs.go:282] 0 containers: []
	W1011 15:09:23.094668    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:09:23.094732    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:09:23.105391    5145 logs.go:282] 1 containers: [21216854be2d]
	I1011 15:09:23.105407    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:09:23.105412    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:09:23.144576    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:09:23.144584    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:09:23.149634    5145 logs.go:123] Gathering logs for coredns [f7976848cbf8] ...
	I1011 15:09:23.149642    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7976848cbf8"
	I1011 15:09:23.161453    5145 logs.go:123] Gathering logs for kube-proxy [08dad2a0a778] ...
	I1011 15:09:23.161468    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dad2a0a778"
	I1011 15:09:23.175839    5145 logs.go:123] Gathering logs for kube-controller-manager [c90d3a849357] ...
	I1011 15:09:23.175854    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90d3a849357"
	I1011 15:09:23.193819    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:09:23.193831    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:09:23.205274    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:09:23.205285    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:09:23.239932    5145 logs.go:123] Gathering logs for kube-apiserver [aed09bb4ddd7] ...
	I1011 15:09:23.239947    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed09bb4ddd7"
	I1011 15:09:23.254275    5145 logs.go:123] Gathering logs for etcd [27d6abe27a49] ...
	I1011 15:09:23.254288    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d6abe27a49"
	I1011 15:09:23.268127    5145 logs.go:123] Gathering logs for coredns [9fca17df288b] ...
	I1011 15:09:23.268139    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fca17df288b"
	I1011 15:09:23.292330    5145 logs.go:123] Gathering logs for kube-scheduler [1e302c51837e] ...
	I1011 15:09:23.292344    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e302c51837e"
	I1011 15:09:23.310212    5145 logs.go:123] Gathering logs for storage-provisioner [21216854be2d] ...
	I1011 15:09:23.310222    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21216854be2d"
	I1011 15:09:23.325523    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:09:23.325537    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:09:25.851033    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:09:30.853371    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:09:30.853901    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:09:30.892641    5145 logs.go:282] 1 containers: [aed09bb4ddd7]
	I1011 15:09:30.892804    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:09:30.913106    5145 logs.go:282] 1 containers: [27d6abe27a49]
	I1011 15:09:30.913230    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:09:30.928639    5145 logs.go:282] 2 containers: [9fca17df288b f7976848cbf8]
	I1011 15:09:30.928727    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:09:30.943903    5145 logs.go:282] 1 containers: [1e302c51837e]
	I1011 15:09:30.943981    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:09:30.954852    5145 logs.go:282] 1 containers: [08dad2a0a778]
	I1011 15:09:30.954929    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:09:30.965507    5145 logs.go:282] 1 containers: [c90d3a849357]
	I1011 15:09:30.965575    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:09:30.976965    5145 logs.go:282] 0 containers: []
	W1011 15:09:30.976976    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:09:30.977030    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:09:30.994186    5145 logs.go:282] 1 containers: [21216854be2d]
	I1011 15:09:30.994210    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:09:30.994215    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:09:31.030891    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:09:31.030900    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:09:31.034942    5145 logs.go:123] Gathering logs for kube-apiserver [aed09bb4ddd7] ...
	I1011 15:09:31.034950    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed09bb4ddd7"
	I1011 15:09:31.049863    5145 logs.go:123] Gathering logs for etcd [27d6abe27a49] ...
	I1011 15:09:31.049876    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d6abe27a49"
	I1011 15:09:31.066778    5145 logs.go:123] Gathering logs for coredns [f7976848cbf8] ...
	I1011 15:09:31.066788    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7976848cbf8"
	I1011 15:09:31.078739    5145 logs.go:123] Gathering logs for kube-scheduler [1e302c51837e] ...
	I1011 15:09:31.078752    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e302c51837e"
	I1011 15:09:31.094009    5145 logs.go:123] Gathering logs for kube-proxy [08dad2a0a778] ...
	I1011 15:09:31.094022    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dad2a0a778"
	I1011 15:09:31.105938    5145 logs.go:123] Gathering logs for kube-controller-manager [c90d3a849357] ...
	I1011 15:09:31.105950    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90d3a849357"
	I1011 15:09:31.123446    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:09:31.123455    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:09:31.148067    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:09:31.148074    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:09:31.185622    5145 logs.go:123] Gathering logs for coredns [9fca17df288b] ...
	I1011 15:09:31.185633    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fca17df288b"
	I1011 15:09:31.197655    5145 logs.go:123] Gathering logs for storage-provisioner [21216854be2d] ...
	I1011 15:09:31.197667    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21216854be2d"
	I1011 15:09:31.209573    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:09:31.209584    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:09:33.723471    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:09:38.726284    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:09:38.726812    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:09:38.767042    5145 logs.go:282] 1 containers: [aed09bb4ddd7]
	I1011 15:09:38.767205    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:09:38.788376    5145 logs.go:282] 1 containers: [27d6abe27a49]
	I1011 15:09:38.788498    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:09:38.803523    5145 logs.go:282] 2 containers: [9fca17df288b f7976848cbf8]
	I1011 15:09:38.803615    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:09:38.815879    5145 logs.go:282] 1 containers: [1e302c51837e]
	I1011 15:09:38.815965    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:09:38.826608    5145 logs.go:282] 1 containers: [08dad2a0a778]
	I1011 15:09:38.826687    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:09:38.837633    5145 logs.go:282] 1 containers: [c90d3a849357]
	I1011 15:09:38.837709    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:09:38.851860    5145 logs.go:282] 0 containers: []
	W1011 15:09:38.851875    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:09:38.851932    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:09:38.866662    5145 logs.go:282] 1 containers: [21216854be2d]
	I1011 15:09:38.866677    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:09:38.866683    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:09:38.903349    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:09:38.903356    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:09:38.907607    5145 logs.go:123] Gathering logs for etcd [27d6abe27a49] ...
	I1011 15:09:38.907612    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d6abe27a49"
	I1011 15:09:38.921499    5145 logs.go:123] Gathering logs for coredns [9fca17df288b] ...
	I1011 15:09:38.921510    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fca17df288b"
	I1011 15:09:38.939601    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:09:38.939614    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:09:38.964269    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:09:38.964277    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:09:38.975849    5145 logs.go:123] Gathering logs for storage-provisioner [21216854be2d] ...
	I1011 15:09:38.975860    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21216854be2d"
	I1011 15:09:38.987569    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:09:38.987580    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:09:39.022572    5145 logs.go:123] Gathering logs for kube-apiserver [aed09bb4ddd7] ...
	I1011 15:09:39.022583    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed09bb4ddd7"
	I1011 15:09:39.038750    5145 logs.go:123] Gathering logs for coredns [f7976848cbf8] ...
	I1011 15:09:39.038766    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7976848cbf8"
	I1011 15:09:39.051041    5145 logs.go:123] Gathering logs for kube-scheduler [1e302c51837e] ...
	I1011 15:09:39.051058    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e302c51837e"
	I1011 15:09:39.067034    5145 logs.go:123] Gathering logs for kube-proxy [08dad2a0a778] ...
	I1011 15:09:39.067044    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dad2a0a778"
	I1011 15:09:39.078665    5145 logs.go:123] Gathering logs for kube-controller-manager [c90d3a849357] ...
	I1011 15:09:39.078675    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90d3a849357"
	I1011 15:09:41.606157    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:09:46.608299    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:09:46.608418    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:09:46.619893    5145 logs.go:282] 1 containers: [aed09bb4ddd7]
	I1011 15:09:46.619972    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:09:46.630779    5145 logs.go:282] 1 containers: [27d6abe27a49]
	I1011 15:09:46.630858    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:09:46.641184    5145 logs.go:282] 3 containers: [5d497f10eaf9 9fca17df288b f7976848cbf8]
	I1011 15:09:46.641258    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:09:46.655268    5145 logs.go:282] 1 containers: [1e302c51837e]
	I1011 15:09:46.655351    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:09:46.676420    5145 logs.go:282] 1 containers: [08dad2a0a778]
	I1011 15:09:46.676498    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:09:46.698577    5145 logs.go:282] 1 containers: [c90d3a849357]
	I1011 15:09:46.698667    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:09:46.713770    5145 logs.go:282] 0 containers: []
	W1011 15:09:46.713782    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:09:46.713853    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:09:46.739750    5145 logs.go:282] 1 containers: [21216854be2d]
	I1011 15:09:46.739774    5145 logs.go:123] Gathering logs for etcd [27d6abe27a49] ...
	I1011 15:09:46.739780    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d6abe27a49"
	I1011 15:09:46.754425    5145 logs.go:123] Gathering logs for coredns [5d497f10eaf9] ...
	I1011 15:09:46.754435    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d497f10eaf9"
	I1011 15:09:46.765446    5145 logs.go:123] Gathering logs for coredns [f7976848cbf8] ...
	I1011 15:09:46.765464    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7976848cbf8"
	I1011 15:09:46.779426    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:09:46.779440    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:09:46.791366    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:09:46.791379    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:09:46.830389    5145 logs.go:123] Gathering logs for kube-scheduler [1e302c51837e] ...
	I1011 15:09:46.830396    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e302c51837e"
	I1011 15:09:46.846925    5145 logs.go:123] Gathering logs for kube-controller-manager [c90d3a849357] ...
	I1011 15:09:46.846937    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90d3a849357"
	I1011 15:09:46.866526    5145 logs.go:123] Gathering logs for kube-apiserver [aed09bb4ddd7] ...
	I1011 15:09:46.866538    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed09bb4ddd7"
	I1011 15:09:46.880469    5145 logs.go:123] Gathering logs for kube-proxy [08dad2a0a778] ...
	I1011 15:09:46.880482    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dad2a0a778"
	I1011 15:09:46.892081    5145 logs.go:123] Gathering logs for storage-provisioner [21216854be2d] ...
	I1011 15:09:46.892093    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21216854be2d"
	I1011 15:09:46.903201    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:09:46.903211    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:09:46.928221    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:09:46.928228    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:09:46.932928    5145 logs.go:123] Gathering logs for coredns [9fca17df288b] ...
	I1011 15:09:46.932936    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fca17df288b"
	I1011 15:09:46.946399    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:09:46.946411    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:09:49.482530    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:09:54.483403    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:09:54.483627    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:09:54.513888    5145 logs.go:282] 1 containers: [aed09bb4ddd7]
	I1011 15:09:54.514029    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:09:54.531100    5145 logs.go:282] 1 containers: [27d6abe27a49]
	I1011 15:09:54.531206    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:09:54.545164    5145 logs.go:282] 4 containers: [ad656429f4d3 5d497f10eaf9 9fca17df288b f7976848cbf8]
	I1011 15:09:54.545249    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:09:54.556885    5145 logs.go:282] 1 containers: [1e302c51837e]
	I1011 15:09:54.556957    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:09:54.567459    5145 logs.go:282] 1 containers: [08dad2a0a778]
	I1011 15:09:54.567535    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:09:54.577711    5145 logs.go:282] 1 containers: [c90d3a849357]
	I1011 15:09:54.577779    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:09:54.588215    5145 logs.go:282] 0 containers: []
	W1011 15:09:54.588225    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:09:54.588279    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:09:54.598941    5145 logs.go:282] 1 containers: [21216854be2d]
	I1011 15:09:54.598959    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:09:54.598965    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:09:54.633086    5145 logs.go:123] Gathering logs for kube-apiserver [aed09bb4ddd7] ...
	I1011 15:09:54.633095    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed09bb4ddd7"
	I1011 15:09:54.647267    5145 logs.go:123] Gathering logs for coredns [5d497f10eaf9] ...
	I1011 15:09:54.647278    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d497f10eaf9"
	I1011 15:09:54.658119    5145 logs.go:123] Gathering logs for kube-scheduler [1e302c51837e] ...
	I1011 15:09:54.658131    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e302c51837e"
	I1011 15:09:54.674191    5145 logs.go:123] Gathering logs for kube-controller-manager [c90d3a849357] ...
	I1011 15:09:54.674204    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90d3a849357"
	I1011 15:09:54.692064    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:09:54.692076    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:09:54.728759    5145 logs.go:123] Gathering logs for coredns [ad656429f4d3] ...
	I1011 15:09:54.728767    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad656429f4d3"
	I1011 15:09:54.740434    5145 logs.go:123] Gathering logs for coredns [f7976848cbf8] ...
	I1011 15:09:54.740446    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7976848cbf8"
	I1011 15:09:54.752271    5145 logs.go:123] Gathering logs for storage-provisioner [21216854be2d] ...
	I1011 15:09:54.752282    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21216854be2d"
	I1011 15:09:54.767954    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:09:54.767966    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:09:54.772815    5145 logs.go:123] Gathering logs for coredns [9fca17df288b] ...
	I1011 15:09:54.772824    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fca17df288b"
	I1011 15:09:54.784384    5145 logs.go:123] Gathering logs for kube-proxy [08dad2a0a778] ...
	I1011 15:09:54.784396    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dad2a0a778"
	I1011 15:09:54.796303    5145 logs.go:123] Gathering logs for etcd [27d6abe27a49] ...
	I1011 15:09:54.796315    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d6abe27a49"
	I1011 15:09:54.813437    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:09:54.813450    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:09:54.825710    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:09:54.825721    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:09:57.350786    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:10:02.353683    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:10:02.354232    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:10:02.394917    5145 logs.go:282] 1 containers: [aed09bb4ddd7]
	I1011 15:10:02.395076    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:10:02.419772    5145 logs.go:282] 1 containers: [27d6abe27a49]
	I1011 15:10:02.419897    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:10:02.435127    5145 logs.go:282] 4 containers: [ad656429f4d3 5d497f10eaf9 9fca17df288b f7976848cbf8]
	I1011 15:10:02.435211    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:10:02.447360    5145 logs.go:282] 1 containers: [1e302c51837e]
	I1011 15:10:02.447439    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:10:02.457680    5145 logs.go:282] 1 containers: [08dad2a0a778]
	I1011 15:10:02.457753    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:10:02.468635    5145 logs.go:282] 1 containers: [c90d3a849357]
	I1011 15:10:02.468707    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:10:02.479013    5145 logs.go:282] 0 containers: []
	W1011 15:10:02.479023    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:10:02.479077    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:10:02.493885    5145 logs.go:282] 1 containers: [21216854be2d]
	I1011 15:10:02.493904    5145 logs.go:123] Gathering logs for coredns [9fca17df288b] ...
	I1011 15:10:02.493911    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fca17df288b"
	I1011 15:10:02.506455    5145 logs.go:123] Gathering logs for coredns [f7976848cbf8] ...
	I1011 15:10:02.506466    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7976848cbf8"
	I1011 15:10:02.518316    5145 logs.go:123] Gathering logs for storage-provisioner [21216854be2d] ...
	I1011 15:10:02.518330    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21216854be2d"
	I1011 15:10:02.530320    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:10:02.530329    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:10:02.534539    5145 logs.go:123] Gathering logs for coredns [ad656429f4d3] ...
	I1011 15:10:02.534548    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad656429f4d3"
	I1011 15:10:02.546809    5145 logs.go:123] Gathering logs for kube-proxy [08dad2a0a778] ...
	I1011 15:10:02.546820    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dad2a0a778"
	I1011 15:10:02.559286    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:10:02.559296    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:10:02.571861    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:10:02.571873    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:10:02.611372    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:10:02.611382    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:10:02.648893    5145 logs.go:123] Gathering logs for coredns [5d497f10eaf9] ...
	I1011 15:10:02.648906    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d497f10eaf9"
	I1011 15:10:02.661245    5145 logs.go:123] Gathering logs for kube-apiserver [aed09bb4ddd7] ...
	I1011 15:10:02.661255    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed09bb4ddd7"
	I1011 15:10:02.677247    5145 logs.go:123] Gathering logs for etcd [27d6abe27a49] ...
	I1011 15:10:02.677259    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d6abe27a49"
	I1011 15:10:02.691436    5145 logs.go:123] Gathering logs for kube-scheduler [1e302c51837e] ...
	I1011 15:10:02.691448    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e302c51837e"
	I1011 15:10:02.707037    5145 logs.go:123] Gathering logs for kube-controller-manager [c90d3a849357] ...
	I1011 15:10:02.707047    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90d3a849357"
	I1011 15:10:02.729095    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:10:02.729105    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:10:05.256649    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:10:10.258992    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:10:10.259070    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:10:10.270661    5145 logs.go:282] 1 containers: [aed09bb4ddd7]
	I1011 15:10:10.270725    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:10:10.281235    5145 logs.go:282] 1 containers: [27d6abe27a49]
	I1011 15:10:10.281305    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:10:10.293121    5145 logs.go:282] 4 containers: [ad656429f4d3 5d497f10eaf9 9fca17df288b f7976848cbf8]
	I1011 15:10:10.293194    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:10:10.305196    5145 logs.go:282] 1 containers: [1e302c51837e]
	I1011 15:10:10.305256    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:10:10.316170    5145 logs.go:282] 1 containers: [08dad2a0a778]
	I1011 15:10:10.316232    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:10:10.330703    5145 logs.go:282] 1 containers: [c90d3a849357]
	I1011 15:10:10.330778    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:10:10.341968    5145 logs.go:282] 0 containers: []
	W1011 15:10:10.341978    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:10:10.342041    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:10:10.354571    5145 logs.go:282] 1 containers: [21216854be2d]
	I1011 15:10:10.354590    5145 logs.go:123] Gathering logs for coredns [f7976848cbf8] ...
	I1011 15:10:10.354597    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7976848cbf8"
	I1011 15:10:10.367189    5145 logs.go:123] Gathering logs for storage-provisioner [21216854be2d] ...
	I1011 15:10:10.367197    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21216854be2d"
	I1011 15:10:10.379138    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:10:10.379150    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:10:10.391893    5145 logs.go:123] Gathering logs for coredns [ad656429f4d3] ...
	I1011 15:10:10.391904    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad656429f4d3"
	I1011 15:10:10.404600    5145 logs.go:123] Gathering logs for kube-scheduler [1e302c51837e] ...
	I1011 15:10:10.404612    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e302c51837e"
	I1011 15:10:10.421152    5145 logs.go:123] Gathering logs for kube-apiserver [aed09bb4ddd7] ...
	I1011 15:10:10.421163    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed09bb4ddd7"
	I1011 15:10:10.436405    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:10:10.436417    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:10:10.441390    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:10:10.441404    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:10:10.478429    5145 logs.go:123] Gathering logs for etcd [27d6abe27a49] ...
	I1011 15:10:10.478439    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d6abe27a49"
	I1011 15:10:10.493922    5145 logs.go:123] Gathering logs for coredns [5d497f10eaf9] ...
	I1011 15:10:10.493937    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d497f10eaf9"
	I1011 15:10:10.506448    5145 logs.go:123] Gathering logs for coredns [9fca17df288b] ...
	I1011 15:10:10.506457    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fca17df288b"
	I1011 15:10:10.519008    5145 logs.go:123] Gathering logs for kube-proxy [08dad2a0a778] ...
	I1011 15:10:10.519018    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dad2a0a778"
	I1011 15:10:10.532448    5145 logs.go:123] Gathering logs for kube-controller-manager [c90d3a849357] ...
	I1011 15:10:10.532460    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90d3a849357"
	I1011 15:10:10.551738    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:10:10.551746    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:10:10.590520    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:10:10.590537    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:10:13.117134    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:10:18.119644    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:10:18.120187    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:10:18.166868    5145 logs.go:282] 1 containers: [aed09bb4ddd7]
	I1011 15:10:18.167024    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:10:18.191285    5145 logs.go:282] 1 containers: [27d6abe27a49]
	I1011 15:10:18.191382    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:10:18.205409    5145 logs.go:282] 4 containers: [ad656429f4d3 5d497f10eaf9 9fca17df288b f7976848cbf8]
	I1011 15:10:18.205491    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:10:18.216989    5145 logs.go:282] 1 containers: [1e302c51837e]
	I1011 15:10:18.217065    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:10:18.227487    5145 logs.go:282] 1 containers: [08dad2a0a778]
	I1011 15:10:18.227555    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:10:18.238346    5145 logs.go:282] 1 containers: [c90d3a849357]
	I1011 15:10:18.238423    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:10:18.248720    5145 logs.go:282] 0 containers: []
	W1011 15:10:18.248730    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:10:18.248813    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:10:18.262540    5145 logs.go:282] 1 containers: [21216854be2d]
	I1011 15:10:18.262557    5145 logs.go:123] Gathering logs for etcd [27d6abe27a49] ...
	I1011 15:10:18.262563    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d6abe27a49"
	I1011 15:10:18.278001    5145 logs.go:123] Gathering logs for coredns [ad656429f4d3] ...
	I1011 15:10:18.278014    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad656429f4d3"
	I1011 15:10:18.289236    5145 logs.go:123] Gathering logs for coredns [9fca17df288b] ...
	I1011 15:10:18.289250    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fca17df288b"
	I1011 15:10:18.300895    5145 logs.go:123] Gathering logs for kube-apiserver [aed09bb4ddd7] ...
	I1011 15:10:18.300907    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed09bb4ddd7"
	I1011 15:10:18.314793    5145 logs.go:123] Gathering logs for coredns [f7976848cbf8] ...
	I1011 15:10:18.314804    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7976848cbf8"
	I1011 15:10:18.326026    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:10:18.326036    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:10:18.330063    5145 logs.go:123] Gathering logs for coredns [5d497f10eaf9] ...
	I1011 15:10:18.330072    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d497f10eaf9"
	I1011 15:10:18.341490    5145 logs.go:123] Gathering logs for kube-scheduler [1e302c51837e] ...
	I1011 15:10:18.341499    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e302c51837e"
	I1011 15:10:18.356526    5145 logs.go:123] Gathering logs for kube-proxy [08dad2a0a778] ...
	I1011 15:10:18.356538    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dad2a0a778"
	I1011 15:10:18.368077    5145 logs.go:123] Gathering logs for kube-controller-manager [c90d3a849357] ...
	I1011 15:10:18.368086    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90d3a849357"
	I1011 15:10:18.386152    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:10:18.386160    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:10:18.410804    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:10:18.410812    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:10:18.448798    5145 logs.go:123] Gathering logs for storage-provisioner [21216854be2d] ...
	I1011 15:10:18.448805    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21216854be2d"
	I1011 15:10:18.460403    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:10:18.460416    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:10:18.472429    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:10:18.472440    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:10:21.010595    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:10:26.013376    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:10:26.013576    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:10:26.031455    5145 logs.go:282] 1 containers: [aed09bb4ddd7]
	I1011 15:10:26.031543    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:10:26.043553    5145 logs.go:282] 1 containers: [27d6abe27a49]
	I1011 15:10:26.043627    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:10:26.054985    5145 logs.go:282] 4 containers: [ad656429f4d3 5d497f10eaf9 9fca17df288b f7976848cbf8]
	I1011 15:10:26.055059    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:10:26.070326    5145 logs.go:282] 1 containers: [1e302c51837e]
	I1011 15:10:26.070414    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:10:26.084218    5145 logs.go:282] 1 containers: [08dad2a0a778]
	I1011 15:10:26.084299    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:10:26.094907    5145 logs.go:282] 1 containers: [c90d3a849357]
	I1011 15:10:26.094982    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:10:26.105096    5145 logs.go:282] 0 containers: []
	W1011 15:10:26.105111    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:10:26.105174    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:10:26.115552    5145 logs.go:282] 1 containers: [21216854be2d]
	I1011 15:10:26.115580    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:10:26.115586    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:10:26.120446    5145 logs.go:123] Gathering logs for kube-apiserver [aed09bb4ddd7] ...
	I1011 15:10:26.120455    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed09bb4ddd7"
	I1011 15:10:26.134479    5145 logs.go:123] Gathering logs for coredns [ad656429f4d3] ...
	I1011 15:10:26.134490    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad656429f4d3"
	I1011 15:10:26.146168    5145 logs.go:123] Gathering logs for coredns [9fca17df288b] ...
	I1011 15:10:26.146181    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fca17df288b"
	I1011 15:10:26.157855    5145 logs.go:123] Gathering logs for coredns [f7976848cbf8] ...
	I1011 15:10:26.157866    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7976848cbf8"
	I1011 15:10:26.169439    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:10:26.169452    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:10:26.211489    5145 logs.go:123] Gathering logs for coredns [5d497f10eaf9] ...
	I1011 15:10:26.211500    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d497f10eaf9"
	I1011 15:10:26.223445    5145 logs.go:123] Gathering logs for kube-scheduler [1e302c51837e] ...
	I1011 15:10:26.223457    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e302c51837e"
	I1011 15:10:26.238626    5145 logs.go:123] Gathering logs for kube-proxy [08dad2a0a778] ...
	I1011 15:10:26.238639    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dad2a0a778"
	I1011 15:10:26.250136    5145 logs.go:123] Gathering logs for kube-controller-manager [c90d3a849357] ...
	I1011 15:10:26.250148    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90d3a849357"
	I1011 15:10:26.267196    5145 logs.go:123] Gathering logs for storage-provisioner [21216854be2d] ...
	I1011 15:10:26.267205    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21216854be2d"
	I1011 15:10:26.279324    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:10:26.279334    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:10:26.316698    5145 logs.go:123] Gathering logs for etcd [27d6abe27a49] ...
	I1011 15:10:26.316705    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d6abe27a49"
	I1011 15:10:26.333873    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:10:26.333887    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:10:26.359545    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:10:26.359557    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:10:28.873275    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:10:33.875665    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:10:33.875766    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:10:33.887654    5145 logs.go:282] 1 containers: [aed09bb4ddd7]
	I1011 15:10:33.887728    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:10:33.901577    5145 logs.go:282] 1 containers: [27d6abe27a49]
	I1011 15:10:33.901658    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:10:33.913613    5145 logs.go:282] 4 containers: [ad656429f4d3 5d497f10eaf9 9fca17df288b f7976848cbf8]
	I1011 15:10:33.913709    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:10:33.925510    5145 logs.go:282] 1 containers: [1e302c51837e]
	I1011 15:10:33.925599    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:10:33.942616    5145 logs.go:282] 1 containers: [08dad2a0a778]
	I1011 15:10:33.942691    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:10:33.959000    5145 logs.go:282] 1 containers: [c90d3a849357]
	I1011 15:10:33.959090    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:10:33.972110    5145 logs.go:282] 0 containers: []
	W1011 15:10:33.972124    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:10:33.972177    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:10:33.983777    5145 logs.go:282] 1 containers: [21216854be2d]
	I1011 15:10:33.983796    5145 logs.go:123] Gathering logs for coredns [f7976848cbf8] ...
	I1011 15:10:33.983802    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7976848cbf8"
	I1011 15:10:33.996679    5145 logs.go:123] Gathering logs for kube-scheduler [1e302c51837e] ...
	I1011 15:10:33.996691    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e302c51837e"
	I1011 15:10:34.013485    5145 logs.go:123] Gathering logs for storage-provisioner [21216854be2d] ...
	I1011 15:10:34.013501    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21216854be2d"
	I1011 15:10:34.028480    5145 logs.go:123] Gathering logs for etcd [27d6abe27a49] ...
	I1011 15:10:34.028492    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d6abe27a49"
	I1011 15:10:34.043572    5145 logs.go:123] Gathering logs for coredns [5d497f10eaf9] ...
	I1011 15:10:34.043589    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d497f10eaf9"
	I1011 15:10:34.056554    5145 logs.go:123] Gathering logs for kube-controller-manager [c90d3a849357] ...
	I1011 15:10:34.056565    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90d3a849357"
	I1011 15:10:34.075869    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:10:34.075881    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:10:34.102583    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:10:34.102600    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:10:34.144968    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:10:34.144981    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:10:34.149748    5145 logs.go:123] Gathering logs for kube-apiserver [aed09bb4ddd7] ...
	I1011 15:10:34.149758    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed09bb4ddd7"
	I1011 15:10:34.165454    5145 logs.go:123] Gathering logs for coredns [ad656429f4d3] ...
	I1011 15:10:34.165468    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad656429f4d3"
	I1011 15:10:34.178617    5145 logs.go:123] Gathering logs for kube-proxy [08dad2a0a778] ...
	I1011 15:10:34.178626    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dad2a0a778"
	I1011 15:10:34.191542    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:10:34.191554    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:10:34.204260    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:10:34.204270    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:10:34.241263    5145 logs.go:123] Gathering logs for coredns [9fca17df288b] ...
	I1011 15:10:34.241276    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fca17df288b"
	I1011 15:10:36.756182    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:10:41.758487    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:10:41.758988    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:10:41.794019    5145 logs.go:282] 1 containers: [aed09bb4ddd7]
	I1011 15:10:41.794169    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:10:41.814203    5145 logs.go:282] 1 containers: [27d6abe27a49]
	I1011 15:10:41.814296    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:10:41.837398    5145 logs.go:282] 4 containers: [ad656429f4d3 5d497f10eaf9 9fca17df288b f7976848cbf8]
	I1011 15:10:41.837478    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:10:41.849329    5145 logs.go:282] 1 containers: [1e302c51837e]
	I1011 15:10:41.849404    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:10:41.860594    5145 logs.go:282] 1 containers: [08dad2a0a778]
	I1011 15:10:41.860659    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:10:41.871572    5145 logs.go:282] 1 containers: [c90d3a849357]
	I1011 15:10:41.871635    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:10:41.881864    5145 logs.go:282] 0 containers: []
	W1011 15:10:41.881875    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:10:41.881940    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:10:41.893305    5145 logs.go:282] 1 containers: [21216854be2d]
	I1011 15:10:41.893324    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:10:41.893329    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:10:41.905034    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:10:41.905055    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:10:41.940916    5145 logs.go:123] Gathering logs for kube-apiserver [aed09bb4ddd7] ...
	I1011 15:10:41.940926    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed09bb4ddd7"
	I1011 15:10:41.955178    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:10:41.955190    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:10:41.980875    5145 logs.go:123] Gathering logs for storage-provisioner [21216854be2d] ...
	I1011 15:10:41.980881    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21216854be2d"
	I1011 15:10:41.992766    5145 logs.go:123] Gathering logs for coredns [5d497f10eaf9] ...
	I1011 15:10:41.992776    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d497f10eaf9"
	I1011 15:10:42.004810    5145 logs.go:123] Gathering logs for kube-proxy [08dad2a0a778] ...
	I1011 15:10:42.004823    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dad2a0a778"
	I1011 15:10:42.017980    5145 logs.go:123] Gathering logs for kube-controller-manager [c90d3a849357] ...
	I1011 15:10:42.017993    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90d3a849357"
	I1011 15:10:42.040033    5145 logs.go:123] Gathering logs for coredns [ad656429f4d3] ...
	I1011 15:10:42.040043    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad656429f4d3"
	I1011 15:10:42.052256    5145 logs.go:123] Gathering logs for coredns [9fca17df288b] ...
	I1011 15:10:42.052270    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fca17df288b"
	I1011 15:10:42.064248    5145 logs.go:123] Gathering logs for kube-scheduler [1e302c51837e] ...
	I1011 15:10:42.064258    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e302c51837e"
	I1011 15:10:42.080384    5145 logs.go:123] Gathering logs for coredns [f7976848cbf8] ...
	I1011 15:10:42.080394    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7976848cbf8"
	I1011 15:10:42.092563    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:10:42.092573    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:10:42.129236    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:10:42.129245    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:10:42.133162    5145 logs.go:123] Gathering logs for etcd [27d6abe27a49] ...
	I1011 15:10:42.133170    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d6abe27a49"
	I1011 15:10:44.649453    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:10:49.652331    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:10:49.652940    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:10:49.690049    5145 logs.go:282] 1 containers: [aed09bb4ddd7]
	I1011 15:10:49.690200    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:10:49.711107    5145 logs.go:282] 1 containers: [27d6abe27a49]
	I1011 15:10:49.711216    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:10:49.725565    5145 logs.go:282] 4 containers: [ad656429f4d3 5d497f10eaf9 9fca17df288b f7976848cbf8]
	I1011 15:10:49.725663    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:10:49.738163    5145 logs.go:282] 1 containers: [1e302c51837e]
	I1011 15:10:49.738235    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:10:49.749030    5145 logs.go:282] 1 containers: [08dad2a0a778]
	I1011 15:10:49.749110    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:10:49.759958    5145 logs.go:282] 1 containers: [c90d3a849357]
	I1011 15:10:49.760032    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:10:49.770454    5145 logs.go:282] 0 containers: []
	W1011 15:10:49.770464    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:10:49.770531    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:10:49.780807    5145 logs.go:282] 1 containers: [21216854be2d]
	I1011 15:10:49.780823    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:10:49.780829    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:10:49.819491    5145 logs.go:123] Gathering logs for coredns [5d497f10eaf9] ...
	I1011 15:10:49.819497    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d497f10eaf9"
	I1011 15:10:49.831793    5145 logs.go:123] Gathering logs for coredns [f7976848cbf8] ...
	I1011 15:10:49.831806    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7976848cbf8"
	I1011 15:10:49.843733    5145 logs.go:123] Gathering logs for kube-scheduler [1e302c51837e] ...
	I1011 15:10:49.843745    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e302c51837e"
	I1011 15:10:49.868703    5145 logs.go:123] Gathering logs for kube-controller-manager [c90d3a849357] ...
	I1011 15:10:49.868717    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90d3a849357"
	I1011 15:10:49.886492    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:10:49.886502    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:10:49.911458    5145 logs.go:123] Gathering logs for coredns [ad656429f4d3] ...
	I1011 15:10:49.911464    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad656429f4d3"
	I1011 15:10:49.923153    5145 logs.go:123] Gathering logs for coredns [9fca17df288b] ...
	I1011 15:10:49.923165    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fca17df288b"
	I1011 15:10:49.935345    5145 logs.go:123] Gathering logs for storage-provisioner [21216854be2d] ...
	I1011 15:10:49.935360    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21216854be2d"
	I1011 15:10:49.946802    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:10:49.946816    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:10:49.959213    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:10:49.959225    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:10:49.964093    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:10:49.964102    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:10:49.998564    5145 logs.go:123] Gathering logs for kube-apiserver [aed09bb4ddd7] ...
	I1011 15:10:49.998578    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed09bb4ddd7"
	I1011 15:10:50.017283    5145 logs.go:123] Gathering logs for etcd [27d6abe27a49] ...
	I1011 15:10:50.017295    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d6abe27a49"
	I1011 15:10:50.031694    5145 logs.go:123] Gathering logs for kube-proxy [08dad2a0a778] ...
	I1011 15:10:50.031706    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dad2a0a778"
	I1011 15:10:52.545937    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:10:57.548663    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:10:57.548933    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:10:57.572479    5145 logs.go:282] 1 containers: [aed09bb4ddd7]
	I1011 15:10:57.572600    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:10:57.588500    5145 logs.go:282] 1 containers: [27d6abe27a49]
	I1011 15:10:57.588579    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:10:57.601076    5145 logs.go:282] 4 containers: [ad656429f4d3 5d497f10eaf9 9fca17df288b f7976848cbf8]
	I1011 15:10:57.601156    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:10:57.612214    5145 logs.go:282] 1 containers: [1e302c51837e]
	I1011 15:10:57.612292    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:10:57.622772    5145 logs.go:282] 1 containers: [08dad2a0a778]
	I1011 15:10:57.622839    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:10:57.632657    5145 logs.go:282] 1 containers: [c90d3a849357]
	I1011 15:10:57.632724    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:10:57.642626    5145 logs.go:282] 0 containers: []
	W1011 15:10:57.642640    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:10:57.642698    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:10:57.658160    5145 logs.go:282] 1 containers: [21216854be2d]
	I1011 15:10:57.658181    5145 logs.go:123] Gathering logs for kube-controller-manager [c90d3a849357] ...
	I1011 15:10:57.658186    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90d3a849357"
	I1011 15:10:57.675983    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:10:57.675992    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:10:57.700397    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:10:57.700404    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:10:57.712775    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:10:57.712785    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:10:57.750083    5145 logs.go:123] Gathering logs for kube-proxy [08dad2a0a778] ...
	I1011 15:10:57.750094    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dad2a0a778"
	I1011 15:10:57.762005    5145 logs.go:123] Gathering logs for coredns [ad656429f4d3] ...
	I1011 15:10:57.762015    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad656429f4d3"
	I1011 15:10:57.773601    5145 logs.go:123] Gathering logs for coredns [5d497f10eaf9] ...
	I1011 15:10:57.773611    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d497f10eaf9"
	I1011 15:10:57.784906    5145 logs.go:123] Gathering logs for coredns [9fca17df288b] ...
	I1011 15:10:57.784915    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fca17df288b"
	I1011 15:10:57.797632    5145 logs.go:123] Gathering logs for coredns [f7976848cbf8] ...
	I1011 15:10:57.797644    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7976848cbf8"
	I1011 15:10:57.809179    5145 logs.go:123] Gathering logs for kube-scheduler [1e302c51837e] ...
	I1011 15:10:57.809191    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e302c51837e"
	I1011 15:10:57.824413    5145 logs.go:123] Gathering logs for kube-apiserver [aed09bb4ddd7] ...
	I1011 15:10:57.824423    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed09bb4ddd7"
	I1011 15:10:57.838734    5145 logs.go:123] Gathering logs for etcd [27d6abe27a49] ...
	I1011 15:10:57.838743    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d6abe27a49"
	I1011 15:10:57.853179    5145 logs.go:123] Gathering logs for storage-provisioner [21216854be2d] ...
	I1011 15:10:57.853192    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21216854be2d"
	I1011 15:10:57.864642    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:10:57.864653    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:10:57.868822    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:10:57.868832    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:11:00.410577    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:11:05.412822    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:11:05.413401    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:11:05.455757    5145 logs.go:282] 1 containers: [aed09bb4ddd7]
	I1011 15:11:05.455899    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:11:05.477946    5145 logs.go:282] 1 containers: [27d6abe27a49]
	I1011 15:11:05.478071    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:11:05.495185    5145 logs.go:282] 4 containers: [ad656429f4d3 5d497f10eaf9 9fca17df288b f7976848cbf8]
	I1011 15:11:05.495277    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:11:05.507358    5145 logs.go:282] 1 containers: [1e302c51837e]
	I1011 15:11:05.507427    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:11:05.518659    5145 logs.go:282] 1 containers: [08dad2a0a778]
	I1011 15:11:05.518736    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:11:05.530492    5145 logs.go:282] 1 containers: [c90d3a849357]
	I1011 15:11:05.530567    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:11:05.540952    5145 logs.go:282] 0 containers: []
	W1011 15:11:05.540961    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:11:05.541027    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:11:05.551400    5145 logs.go:282] 1 containers: [21216854be2d]
	I1011 15:11:05.551420    5145 logs.go:123] Gathering logs for coredns [ad656429f4d3] ...
	I1011 15:11:05.551426    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad656429f4d3"
	I1011 15:11:05.562931    5145 logs.go:123] Gathering logs for coredns [5d497f10eaf9] ...
	I1011 15:11:05.562943    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d497f10eaf9"
	I1011 15:11:05.574360    5145 logs.go:123] Gathering logs for coredns [f7976848cbf8] ...
	I1011 15:11:05.574371    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7976848cbf8"
	I1011 15:11:05.590382    5145 logs.go:123] Gathering logs for kube-controller-manager [c90d3a849357] ...
	I1011 15:11:05.590395    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90d3a849357"
	I1011 15:11:05.608196    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:11:05.608208    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:11:05.630917    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:11:05.630923    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:11:05.668040    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:11:05.668048    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:11:05.701834    5145 logs.go:123] Gathering logs for kube-scheduler [1e302c51837e] ...
	I1011 15:11:05.701847    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e302c51837e"
	I1011 15:11:05.717544    5145 logs.go:123] Gathering logs for storage-provisioner [21216854be2d] ...
	I1011 15:11:05.717554    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21216854be2d"
	I1011 15:11:05.728934    5145 logs.go:123] Gathering logs for etcd [27d6abe27a49] ...
	I1011 15:11:05.728943    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d6abe27a49"
	I1011 15:11:05.743064    5145 logs.go:123] Gathering logs for coredns [9fca17df288b] ...
	I1011 15:11:05.743077    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fca17df288b"
	I1011 15:11:05.759294    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:11:05.759305    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:11:05.770979    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:11:05.770991    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:11:05.775566    5145 logs.go:123] Gathering logs for kube-apiserver [aed09bb4ddd7] ...
	I1011 15:11:05.775574    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed09bb4ddd7"
	I1011 15:11:05.790043    5145 logs.go:123] Gathering logs for kube-proxy [08dad2a0a778] ...
	I1011 15:11:05.790052    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dad2a0a778"
	I1011 15:11:08.303646    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:11:13.305764    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:11:13.305843    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:11:13.317002    5145 logs.go:282] 1 containers: [aed09bb4ddd7]
	I1011 15:11:13.317078    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:11:13.327521    5145 logs.go:282] 1 containers: [27d6abe27a49]
	I1011 15:11:13.327581    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:11:13.339099    5145 logs.go:282] 4 containers: [ad656429f4d3 5d497f10eaf9 9fca17df288b f7976848cbf8]
	I1011 15:11:13.339162    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:11:13.350345    5145 logs.go:282] 1 containers: [1e302c51837e]
	I1011 15:11:13.350428    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:11:13.360525    5145 logs.go:282] 1 containers: [08dad2a0a778]
	I1011 15:11:13.360593    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:11:13.378897    5145 logs.go:282] 1 containers: [c90d3a849357]
	I1011 15:11:13.378967    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:11:13.398315    5145 logs.go:282] 0 containers: []
	W1011 15:11:13.398331    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:11:13.398388    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:11:13.415874    5145 logs.go:282] 1 containers: [21216854be2d]
	I1011 15:11:13.415894    5145 logs.go:123] Gathering logs for kube-controller-manager [c90d3a849357] ...
	I1011 15:11:13.415902    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90d3a849357"
	I1011 15:11:13.433523    5145 logs.go:123] Gathering logs for kube-apiserver [aed09bb4ddd7] ...
	I1011 15:11:13.433533    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed09bb4ddd7"
	I1011 15:11:13.448186    5145 logs.go:123] Gathering logs for coredns [9fca17df288b] ...
	I1011 15:11:13.448197    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fca17df288b"
	I1011 15:11:13.459476    5145 logs.go:123] Gathering logs for coredns [f7976848cbf8] ...
	I1011 15:11:13.459486    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7976848cbf8"
	I1011 15:11:13.471361    5145 logs.go:123] Gathering logs for kube-scheduler [1e302c51837e] ...
	I1011 15:11:13.471371    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e302c51837e"
	I1011 15:11:13.486797    5145 logs.go:123] Gathering logs for storage-provisioner [21216854be2d] ...
	I1011 15:11:13.486807    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21216854be2d"
	I1011 15:11:13.498162    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:11:13.498172    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:11:13.502883    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:11:13.502889    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:11:13.538964    5145 logs.go:123] Gathering logs for etcd [27d6abe27a49] ...
	I1011 15:11:13.538974    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d6abe27a49"
	I1011 15:11:13.552780    5145 logs.go:123] Gathering logs for kube-proxy [08dad2a0a778] ...
	I1011 15:11:13.552791    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dad2a0a778"
	I1011 15:11:13.563994    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:11:13.564004    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:11:13.577059    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:11:13.577069    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:11:13.599738    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:11:13.599749    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:11:13.636689    5145 logs.go:123] Gathering logs for coredns [ad656429f4d3] ...
	I1011 15:11:13.636697    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad656429f4d3"
	I1011 15:11:13.648338    5145 logs.go:123] Gathering logs for coredns [5d497f10eaf9] ...
	I1011 15:11:13.648348    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d497f10eaf9"
	I1011 15:11:16.161963    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:11:21.164829    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:11:21.165272    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:11:21.198055    5145 logs.go:282] 1 containers: [aed09bb4ddd7]
	I1011 15:11:21.198197    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:11:21.216631    5145 logs.go:282] 1 containers: [27d6abe27a49]
	I1011 15:11:21.216735    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:11:21.230860    5145 logs.go:282] 4 containers: [ad656429f4d3 5d497f10eaf9 9fca17df288b f7976848cbf8]
	I1011 15:11:21.230943    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:11:21.242818    5145 logs.go:282] 1 containers: [1e302c51837e]
	I1011 15:11:21.242892    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:11:21.253905    5145 logs.go:282] 1 containers: [08dad2a0a778]
	I1011 15:11:21.253970    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:11:21.264404    5145 logs.go:282] 1 containers: [c90d3a849357]
	I1011 15:11:21.264477    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:11:21.275518    5145 logs.go:282] 0 containers: []
	W1011 15:11:21.275532    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:11:21.275598    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:11:21.286322    5145 logs.go:282] 1 containers: [21216854be2d]
	I1011 15:11:21.286340    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:11:21.286346    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:11:21.291305    5145 logs.go:123] Gathering logs for etcd [27d6abe27a49] ...
	I1011 15:11:21.291313    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d6abe27a49"
	I1011 15:11:21.305517    5145 logs.go:123] Gathering logs for coredns [9fca17df288b] ...
	I1011 15:11:21.305527    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fca17df288b"
	I1011 15:11:21.321114    5145 logs.go:123] Gathering logs for kube-proxy [08dad2a0a778] ...
	I1011 15:11:21.321128    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dad2a0a778"
	I1011 15:11:21.333887    5145 logs.go:123] Gathering logs for kube-controller-manager [c90d3a849357] ...
	I1011 15:11:21.333898    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90d3a849357"
	I1011 15:11:21.351520    5145 logs.go:123] Gathering logs for storage-provisioner [21216854be2d] ...
	I1011 15:11:21.351530    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21216854be2d"
	I1011 15:11:21.362597    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:11:21.362606    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:11:21.398190    5145 logs.go:123] Gathering logs for kube-apiserver [aed09bb4ddd7] ...
	I1011 15:11:21.398199    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed09bb4ddd7"
	I1011 15:11:21.412897    5145 logs.go:123] Gathering logs for coredns [ad656429f4d3] ...
	I1011 15:11:21.412906    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad656429f4d3"
	I1011 15:11:21.424118    5145 logs.go:123] Gathering logs for coredns [f7976848cbf8] ...
	I1011 15:11:21.424127    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7976848cbf8"
	I1011 15:11:21.436156    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:11:21.436167    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:11:21.475411    5145 logs.go:123] Gathering logs for coredns [5d497f10eaf9] ...
	I1011 15:11:21.475420    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d497f10eaf9"
	I1011 15:11:21.490733    5145 logs.go:123] Gathering logs for kube-scheduler [1e302c51837e] ...
	I1011 15:11:21.490742    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e302c51837e"
	I1011 15:11:21.506315    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:11:21.506325    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:11:21.531338    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:11:21.531357    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:11:24.045066    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:11:29.047227    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:11:29.047737    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1011 15:11:29.098761    5145 logs.go:282] 1 containers: [aed09bb4ddd7]
	I1011 15:11:29.098900    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1011 15:11:29.117762    5145 logs.go:282] 1 containers: [27d6abe27a49]
	I1011 15:11:29.117865    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1011 15:11:29.131260    5145 logs.go:282] 4 containers: [ad656429f4d3 5d497f10eaf9 9fca17df288b f7976848cbf8]
	I1011 15:11:29.131339    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1011 15:11:29.142963    5145 logs.go:282] 1 containers: [1e302c51837e]
	I1011 15:11:29.143045    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1011 15:11:29.153802    5145 logs.go:282] 1 containers: [08dad2a0a778]
	I1011 15:11:29.153874    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1011 15:11:29.165467    5145 logs.go:282] 1 containers: [c90d3a849357]
	I1011 15:11:29.165543    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1011 15:11:29.175908    5145 logs.go:282] 0 containers: []
	W1011 15:11:29.175917    5145 logs.go:284] No container was found matching "kindnet"
	I1011 15:11:29.175974    5145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1011 15:11:29.186434    5145 logs.go:282] 1 containers: [21216854be2d]
	I1011 15:11:29.186451    5145 logs.go:123] Gathering logs for coredns [f7976848cbf8] ...
	I1011 15:11:29.186457    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7976848cbf8"
	I1011 15:11:29.198316    5145 logs.go:123] Gathering logs for coredns [9fca17df288b] ...
	I1011 15:11:29.198327    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fca17df288b"
	I1011 15:11:29.210786    5145 logs.go:123] Gathering logs for describe nodes ...
	I1011 15:11:29.210800    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 15:11:29.246504    5145 logs.go:123] Gathering logs for etcd [27d6abe27a49] ...
	I1011 15:11:29.246518    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d6abe27a49"
	I1011 15:11:29.261270    5145 logs.go:123] Gathering logs for coredns [ad656429f4d3] ...
	I1011 15:11:29.261283    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad656429f4d3"
	I1011 15:11:29.272936    5145 logs.go:123] Gathering logs for kube-scheduler [1e302c51837e] ...
	I1011 15:11:29.272950    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e302c51837e"
	I1011 15:11:29.288377    5145 logs.go:123] Gathering logs for kube-proxy [08dad2a0a778] ...
	I1011 15:11:29.288387    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dad2a0a778"
	I1011 15:11:29.300068    5145 logs.go:123] Gathering logs for kube-controller-manager [c90d3a849357] ...
	I1011 15:11:29.300078    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90d3a849357"
	I1011 15:11:29.317923    5145 logs.go:123] Gathering logs for dmesg ...
	I1011 15:11:29.317933    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 15:11:29.322550    5145 logs.go:123] Gathering logs for Docker ...
	I1011 15:11:29.322558    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1011 15:11:29.345077    5145 logs.go:123] Gathering logs for coredns [5d497f10eaf9] ...
	I1011 15:11:29.345086    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d497f10eaf9"
	I1011 15:11:29.356577    5145 logs.go:123] Gathering logs for kube-apiserver [aed09bb4ddd7] ...
	I1011 15:11:29.356587    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed09bb4ddd7"
	I1011 15:11:29.371119    5145 logs.go:123] Gathering logs for storage-provisioner [21216854be2d] ...
	I1011 15:11:29.371129    5145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21216854be2d"
	I1011 15:11:29.383282    5145 logs.go:123] Gathering logs for container status ...
	I1011 15:11:29.383296    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 15:11:29.395940    5145 logs.go:123] Gathering logs for kubelet ...
	I1011 15:11:29.395953    5145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1011 15:11:31.935056    5145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1011 15:11:36.937658    5145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1011 15:11:36.942146    5145 out.go:201] 
	W1011 15:11:36.945282    5145 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1011 15:11:36.945288    5145 out.go:270] * 
	* 
	W1011 15:11:36.945756    5145 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 15:11:36.957093    5145 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-583000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (574.16s)
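The failure above reduces to the apiserver healthz probe at https://10.0.2.15:8443/healthz never reporting healthy within the 6m0s node wait. A rough way to reproduce that probe by hand (a sketch only, assuming the stopped-upgrade-583000 profile from the command above still exists and its guest is reachable over SSH):

	# Probe the apiserver health endpoint from inside the guest; -k because the cert is self-signed.
	minikube ssh -p stopped-upgrade-583000 -- curl -sk https://10.0.2.15:8443/healthz
	# If the probe still hangs, inspect the apiserver container the log already identified (aed09bb4ddd7).
	minikube ssh -p stopped-upgrade-583000 -- sudo docker logs --tail 50 aed09bb4ddd7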

                                                
                                    
TestPause/serial/Start (10.14s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-973000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-973000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (10.103763834s)

                                                
                                                
-- stdout --
	* [pause-973000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-973000" primary control-plane node in "pause-973000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-973000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-973000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-973000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-973000 -n pause-973000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-973000 -n pause-973000: exit status 7 (35.331292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-973000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.14s)
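Both create attempts in this run die on the same 'Failed to connect to "/var/run/socket_vmnet": Connection refused', which points at the socket_vmnet daemon on the build host rather than at the profile being started. A minimal host-side check (a sketch, assuming socket_vmnet was installed via Homebrew as the minikube qemu2 driver docs describe):

	# Is the daemon running and is its socket present?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# If either check fails, restart the service (it needs root to create the vmnet interface).
	sudo brew services restart socket_vmnet

With the socket back, the "minikube delete -p pause-973000" suggested in the stderr above would be the next step before retrying the start.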

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (9.92s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-796000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-796000 --driver=qemu2 : exit status 80 (9.84995175s)

                                                
                                                
-- stdout --
	* [NoKubernetes-796000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-796000" primary control-plane node in "NoKubernetes-796000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-796000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-796000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-796000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-796000 -n NoKubernetes-796000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-796000 -n NoKubernetes-796000: exit status 7 (73.098625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-796000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.92s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-796000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-796000 --no-kubernetes --driver=qemu2 : exit status 80 (5.24268025s)

                                                
                                                
-- stdout --
	* [NoKubernetes-796000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-796000
	* Restarting existing qemu2 VM for "NoKubernetes-796000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-796000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-796000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-796000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-796000 -n NoKubernetes-796000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-796000 -n NoKubernetes-796000: exit status 7 (36.224333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-796000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.28s)

                                                
                                    
TestNoKubernetes/serial/Start (5.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-796000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-796000 --no-kubernetes --driver=qemu2 : exit status 80 (5.261893791s)

                                                
                                                
-- stdout --
	* [NoKubernetes-796000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-796000
	* Restarting existing qemu2 VM for "NoKubernetes-796000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-796000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-796000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-796000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-796000 -n NoKubernetes-796000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-796000 -n NoKubernetes-796000: exit status 7 (72.017792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-796000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.33s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-796000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-796000 --driver=qemu2 : exit status 80 (5.275650834s)

                                                
                                                
-- stdout --
	* [NoKubernetes-796000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-796000
	* Restarting existing qemu2 VM for "NoKubernetes-796000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-796000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-796000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-796000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-796000 -n NoKubernetes-796000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-796000 -n NoKubernetes-796000: exit status 7 (60.957208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-796000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.34s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-204000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-204000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.806264208s)

                                                
                                                
-- stdout --
	* [auto-204000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-204000" primary control-plane node in "auto-204000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-204000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 15:09:59.526559    5370 out.go:345] Setting OutFile to fd 1 ...
	I1011 15:09:59.526764    5370 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:09:59.526772    5370 out.go:358] Setting ErrFile to fd 2...
	I1011 15:09:59.526774    5370 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:09:59.526912    5370 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 15:09:59.528141    5370 out.go:352] Setting JSON to false
	I1011 15:09:59.547116    5370 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5969,"bootTime":1728678630,"procs":505,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 15:09:59.547224    5370 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 15:09:59.552072    5370 out.go:177] * [auto-204000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 15:09:59.560126    5370 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 15:09:59.560213    5370 notify.go:220] Checking for updates...
	I1011 15:09:59.567137    5370 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 15:09:59.570144    5370 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 15:09:59.573194    5370 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 15:09:59.576179    5370 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 15:09:59.579152    5370 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 15:09:59.582551    5370 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 15:09:59.582619    5370 config.go:182] Loaded profile config "stopped-upgrade-583000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1011 15:09:59.582664    5370 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 15:09:59.587134    5370 out.go:177] * Using the qemu2 driver based on user configuration
	I1011 15:09:59.594027    5370 start.go:297] selected driver: qemu2
	I1011 15:09:59.594034    5370 start.go:901] validating driver "qemu2" against <nil>
	I1011 15:09:59.594040    5370 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 15:09:59.596723    5370 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 15:09:59.600121    5370 out.go:177] * Automatically selected the socket_vmnet network
	I1011 15:09:59.603212    5370 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 15:09:59.603232    5370 cni.go:84] Creating CNI manager for ""
	I1011 15:09:59.603253    5370 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1011 15:09:59.603260    5370 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1011 15:09:59.603297    5370 start.go:340] cluster config:
	{Name:auto-204000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-204000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:dock
er CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_clie
nt SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 15:09:59.607964    5370 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:09:59.616098    5370 out.go:177] * Starting "auto-204000" primary control-plane node in "auto-204000" cluster
	I1011 15:09:59.620014    5370 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1011 15:09:59.620034    5370 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1011 15:09:59.620047    5370 cache.go:56] Caching tarball of preloaded images
	I1011 15:09:59.620136    5370 preload.go:172] Found /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1011 15:09:59.620141    5370 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1011 15:09:59.620197    5370 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/auto-204000/config.json ...
	I1011 15:09:59.620208    5370 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/auto-204000/config.json: {Name:mk93afda54007a3d3d26520bebc832c5ea3e28b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 15:09:59.620537    5370 start.go:360] acquireMachinesLock for auto-204000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:09:59.620582    5370 start.go:364] duration metric: took 36.542µs to acquireMachinesLock for "auto-204000"
	I1011 15:09:59.620594    5370 start.go:93] Provisioning new machine with config: &{Name:auto-204000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.1 ClusterName:auto-204000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 15:09:59.620622    5370 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 15:09:59.625155    5370 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1011 15:09:59.639626    5370 start.go:159] libmachine.API.Create for "auto-204000" (driver="qemu2")
	I1011 15:09:59.639650    5370 client.go:168] LocalClient.Create starting
	I1011 15:09:59.639716    5370 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 15:09:59.639754    5370 main.go:141] libmachine: Decoding PEM data...
	I1011 15:09:59.639764    5370 main.go:141] libmachine: Parsing certificate...
	I1011 15:09:59.639803    5370 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 15:09:59.639832    5370 main.go:141] libmachine: Decoding PEM data...
	I1011 15:09:59.639840    5370 main.go:141] libmachine: Parsing certificate...
	I1011 15:09:59.640158    5370 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 15:09:59.797137    5370 main.go:141] libmachine: Creating SSH key...
	I1011 15:09:59.894744    5370 main.go:141] libmachine: Creating Disk image...
	I1011 15:09:59.894755    5370 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 15:09:59.894965    5370 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/auto-204000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/auto-204000/disk.qcow2
	I1011 15:09:59.905212    5370 main.go:141] libmachine: STDOUT: 
	I1011 15:09:59.905248    5370 main.go:141] libmachine: STDERR: 
	I1011 15:09:59.905307    5370 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/auto-204000/disk.qcow2 +20000M
	I1011 15:09:59.914101    5370 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 15:09:59.914117    5370 main.go:141] libmachine: STDERR: 
	I1011 15:09:59.914129    5370 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/auto-204000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/auto-204000/disk.qcow2
	I1011 15:09:59.914133    5370 main.go:141] libmachine: Starting QEMU VM...
	I1011 15:09:59.914144    5370 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:09:59.914170    5370 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/auto-204000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/auto-204000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/auto-204000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:1c:35:93:12:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/auto-204000/disk.qcow2
	I1011 15:09:59.916023    5370 main.go:141] libmachine: STDOUT: 
	I1011 15:09:59.916038    5370 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:09:59.916065    5370 client.go:171] duration metric: took 276.4145ms to LocalClient.Create
	I1011 15:10:01.918224    5370 start.go:128] duration metric: took 2.297607291s to createHost
	I1011 15:10:01.918312    5370 start.go:83] releasing machines lock for "auto-204000", held for 2.297757208s
	W1011 15:10:01.918372    5370 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:10:01.929346    5370 out.go:177] * Deleting "auto-204000" in qemu2 ...
	W1011 15:10:01.952329    5370 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:10:01.952368    5370 start.go:729] Will try again in 5 seconds ...
	I1011 15:10:06.954549    5370 start.go:360] acquireMachinesLock for auto-204000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:10:06.954974    5370 start.go:364] duration metric: took 312.584µs to acquireMachinesLock for "auto-204000"
	I1011 15:10:06.955079    5370 start.go:93] Provisioning new machine with config: &{Name:auto-204000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.1 ClusterName:auto-204000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 15:10:06.955276    5370 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 15:10:06.961861    5370 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1011 15:10:06.999564    5370 start.go:159] libmachine.API.Create for "auto-204000" (driver="qemu2")
	I1011 15:10:06.999616    5370 client.go:168] LocalClient.Create starting
	I1011 15:10:06.999756    5370 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 15:10:06.999831    5370 main.go:141] libmachine: Decoding PEM data...
	I1011 15:10:06.999852    5370 main.go:141] libmachine: Parsing certificate...
	I1011 15:10:06.999947    5370 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 15:10:06.999999    5370 main.go:141] libmachine: Decoding PEM data...
	I1011 15:10:07.000021    5370 main.go:141] libmachine: Parsing certificate...
	I1011 15:10:07.000634    5370 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 15:10:07.164485    5370 main.go:141] libmachine: Creating SSH key...
	I1011 15:10:07.231814    5370 main.go:141] libmachine: Creating Disk image...
	I1011 15:10:07.231821    5370 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 15:10:07.232042    5370 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/auto-204000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/auto-204000/disk.qcow2
	I1011 15:10:07.241937    5370 main.go:141] libmachine: STDOUT: 
	I1011 15:10:07.241954    5370 main.go:141] libmachine: STDERR: 
	I1011 15:10:07.242012    5370 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/auto-204000/disk.qcow2 +20000M
	I1011 15:10:07.250821    5370 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 15:10:07.250837    5370 main.go:141] libmachine: STDERR: 
	I1011 15:10:07.250873    5370 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/auto-204000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/auto-204000/disk.qcow2
	I1011 15:10:07.250879    5370 main.go:141] libmachine: Starting QEMU VM...
	I1011 15:10:07.250889    5370 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:10:07.250927    5370 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/auto-204000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/auto-204000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/auto-204000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:80:ff:bc:02:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/auto-204000/disk.qcow2
	I1011 15:10:07.252961    5370 main.go:141] libmachine: STDOUT: 
	I1011 15:10:07.252975    5370 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:10:07.252989    5370 client.go:171] duration metric: took 253.370708ms to LocalClient.Create
	I1011 15:10:09.255152    5370 start.go:128] duration metric: took 2.299861791s to createHost
	I1011 15:10:09.255239    5370 start.go:83] releasing machines lock for "auto-204000", held for 2.300284916s
	W1011 15:10:09.255537    5370 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-204000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-204000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:10:09.267151    5370 out.go:201] 
	W1011 15:10:09.271184    5370 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 15:10:09.271227    5370 out.go:270] * 
	* 
	W1011 15:10:09.273175    5370 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 15:10:09.286104    5370 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.81s)
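The alsologtostderr trace shows libmachine launching qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the "Connection refused" is emitted by that wrapper before QEMU ever runs. To confirm the wrapper is the failing piece in isolation (a sketch, assuming socket_vmnet_client keeps its usual invocation of a socket path followed by a command to exec):

	# Exits 0 and runs the command when the daemon is listening on the socket;
	# prints the same "Connection refused" seen above when it is not.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true
	echo $?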

                                                
                                    
TestNetworkPlugins/group/calico/Start (9.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-204000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-204000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.772283s)

                                                
                                                
-- stdout --
	* [calico-204000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-204000" primary control-plane node in "calico-204000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-204000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 15:10:11.683743    5484 out.go:345] Setting OutFile to fd 1 ...
	I1011 15:10:11.683896    5484 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:10:11.683899    5484 out.go:358] Setting ErrFile to fd 2...
	I1011 15:10:11.683901    5484 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:10:11.684038    5484 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 15:10:11.685258    5484 out.go:352] Setting JSON to false
	I1011 15:10:11.703325    5484 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5981,"bootTime":1728678630,"procs":505,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 15:10:11.703385    5484 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 15:10:11.709202    5484 out.go:177] * [calico-204000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 15:10:11.716109    5484 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 15:10:11.716193    5484 notify.go:220] Checking for updates...
	I1011 15:10:11.723226    5484 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 15:10:11.726107    5484 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 15:10:11.730196    5484 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 15:10:11.733228    5484 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 15:10:11.736188    5484 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 15:10:11.739587    5484 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 15:10:11.739664    5484 config.go:182] Loaded profile config "stopped-upgrade-583000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1011 15:10:11.739722    5484 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 15:10:11.744173    5484 out.go:177] * Using the qemu2 driver based on user configuration
	I1011 15:10:11.751121    5484 start.go:297] selected driver: qemu2
	I1011 15:10:11.751126    5484 start.go:901] validating driver "qemu2" against <nil>
	I1011 15:10:11.751132    5484 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 15:10:11.753565    5484 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 15:10:11.757224    5484 out.go:177] * Automatically selected the socket_vmnet network
	I1011 15:10:11.760126    5484 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 15:10:11.760145    5484 cni.go:84] Creating CNI manager for "calico"
	I1011 15:10:11.760148    5484 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I1011 15:10:11.760177    5484 start.go:340] cluster config:
	{Name:calico-204000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-204000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 15:10:11.764348    5484 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:10:11.772185    5484 out.go:177] * Starting "calico-204000" primary control-plane node in "calico-204000" cluster
	I1011 15:10:11.776162    5484 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1011 15:10:11.776175    5484 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1011 15:10:11.776183    5484 cache.go:56] Caching tarball of preloaded images
	I1011 15:10:11.776255    5484 preload.go:172] Found /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1011 15:10:11.776260    5484 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1011 15:10:11.776309    5484 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/calico-204000/config.json ...
	I1011 15:10:11.776320    5484 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/calico-204000/config.json: {Name:mk80779c820387536fe5345b29078c0a345bc788 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 15:10:11.776584    5484 start.go:360] acquireMachinesLock for calico-204000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:10:11.776626    5484 start.go:364] duration metric: took 36.792µs to acquireMachinesLock for "calico-204000"
	I1011 15:10:11.776637    5484 start.go:93] Provisioning new machine with config: &{Name:calico-204000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:calico-204000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 15:10:11.776661    5484 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 15:10:11.784134    5484 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1011 15:10:11.799463    5484 start.go:159] libmachine.API.Create for "calico-204000" (driver="qemu2")
	I1011 15:10:11.799500    5484 client.go:168] LocalClient.Create starting
	I1011 15:10:11.799590    5484 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 15:10:11.799631    5484 main.go:141] libmachine: Decoding PEM data...
	I1011 15:10:11.799642    5484 main.go:141] libmachine: Parsing certificate...
	I1011 15:10:11.799677    5484 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 15:10:11.799706    5484 main.go:141] libmachine: Decoding PEM data...
	I1011 15:10:11.799712    5484 main.go:141] libmachine: Parsing certificate...
	I1011 15:10:11.800101    5484 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 15:10:11.958318    5484 main.go:141] libmachine: Creating SSH key...
	I1011 15:10:12.071234    5484 main.go:141] libmachine: Creating Disk image...
	I1011 15:10:12.071244    5484 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 15:10:12.071475    5484 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/calico-204000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/calico-204000/disk.qcow2
	I1011 15:10:12.081535    5484 main.go:141] libmachine: STDOUT: 
	I1011 15:10:12.081555    5484 main.go:141] libmachine: STDERR: 
	I1011 15:10:12.081604    5484 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/calico-204000/disk.qcow2 +20000M
	I1011 15:10:12.090126    5484 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 15:10:12.090149    5484 main.go:141] libmachine: STDERR: 
	I1011 15:10:12.090165    5484 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/calico-204000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/calico-204000/disk.qcow2
	I1011 15:10:12.090169    5484 main.go:141] libmachine: Starting QEMU VM...
	I1011 15:10:12.090184    5484 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:10:12.090217    5484 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/calico-204000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/calico-204000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/calico-204000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:3e:e6:ce:ad:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/calico-204000/disk.qcow2
	I1011 15:10:12.092150    5484 main.go:141] libmachine: STDOUT: 
	I1011 15:10:12.092163    5484 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:10:12.092182    5484 client.go:171] duration metric: took 292.680792ms to LocalClient.Create
	I1011 15:10:14.094263    5484 start.go:128] duration metric: took 2.317624334s to createHost
	I1011 15:10:14.094316    5484 start.go:83] releasing machines lock for "calico-204000", held for 2.31772075s
	W1011 15:10:14.094386    5484 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:10:14.103508    5484 out.go:177] * Deleting "calico-204000" in qemu2 ...
	W1011 15:10:14.126188    5484 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:10:14.126201    5484 start.go:729] Will try again in 5 seconds ...
	I1011 15:10:19.128308    5484 start.go:360] acquireMachinesLock for calico-204000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:10:19.128662    5484 start.go:364] duration metric: took 294.083µs to acquireMachinesLock for "calico-204000"
	I1011 15:10:19.128726    5484 start.go:93] Provisioning new machine with config: &{Name:calico-204000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:calico-204000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 15:10:19.128842    5484 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 15:10:19.140225    5484 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1011 15:10:19.164424    5484 start.go:159] libmachine.API.Create for "calico-204000" (driver="qemu2")
	I1011 15:10:19.164469    5484 client.go:168] LocalClient.Create starting
	I1011 15:10:19.164557    5484 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 15:10:19.164614    5484 main.go:141] libmachine: Decoding PEM data...
	I1011 15:10:19.164626    5484 main.go:141] libmachine: Parsing certificate...
	I1011 15:10:19.164686    5484 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 15:10:19.164729    5484 main.go:141] libmachine: Decoding PEM data...
	I1011 15:10:19.164743    5484 main.go:141] libmachine: Parsing certificate...
	I1011 15:10:19.165228    5484 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 15:10:19.321051    5484 main.go:141] libmachine: Creating SSH key...
	I1011 15:10:19.358292    5484 main.go:141] libmachine: Creating Disk image...
	I1011 15:10:19.358297    5484 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 15:10:19.358506    5484 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/calico-204000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/calico-204000/disk.qcow2
	I1011 15:10:19.368434    5484 main.go:141] libmachine: STDOUT: 
	I1011 15:10:19.368453    5484 main.go:141] libmachine: STDERR: 
	I1011 15:10:19.368502    5484 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/calico-204000/disk.qcow2 +20000M
	I1011 15:10:19.377086    5484 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 15:10:19.377104    5484 main.go:141] libmachine: STDERR: 
	I1011 15:10:19.377119    5484 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/calico-204000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/calico-204000/disk.qcow2
	I1011 15:10:19.377123    5484 main.go:141] libmachine: Starting QEMU VM...
	I1011 15:10:19.377144    5484 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:10:19.377179    5484 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/calico-204000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/calico-204000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/calico-204000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:2b:0e:e7:c7:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/calico-204000/disk.qcow2
	I1011 15:10:19.379074    5484 main.go:141] libmachine: STDOUT: 
	I1011 15:10:19.379088    5484 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:10:19.379102    5484 client.go:171] duration metric: took 214.631375ms to LocalClient.Create
	I1011 15:10:21.381265    5484 start.go:128] duration metric: took 2.252427792s to createHost
	I1011 15:10:21.381376    5484 start.go:83] releasing machines lock for "calico-204000", held for 2.252735584s
	W1011 15:10:21.381829    5484 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-204000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-204000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:10:21.391475    5484 out.go:201] 
	W1011 15:10:21.397567    5484 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 15:10:21.397599    5484 out.go:270] * 
	* 
	W1011 15:10:21.400435    5484 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 15:10:21.409415    5484 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.77s)
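Every failure in this group stops at the same point: socket_vmnet_client cannot reach /var/run/socket_vmnet, so QEMU never gets its network socket and minikube exits with GUEST_PROVISION. A minimal diagnostic sketch for the CI host follows; it assumes a standard socket_vmnet install at the paths shown in the log above, and the launchd check is only a guess about how the daemon is managed on this machine.

	# Hedged sketch: check whether the socket_vmnet daemon is reachable on the CI host.
	# Paths come from the log above; the launchd service name is an assumption.
	ls -l /var/run/socket_vmnet                    # the socket file should exist if the daemon is up
	pgrep -fl socket_vmnet                         # is a socket_vmnet process running at all?
	sudo launchctl list | grep -i socket_vmnet     # assumed: daemon managed as a launchd service
	# Probe the socket the same way minikube does, with a harmless command instead of QEMU:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true \
	    || echo "still refused - matches the error in the test log"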

TestNetworkPlugins/group/custom-flannel/Start (9.86s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-204000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-204000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.862440917s)

-- stdout --
	* [custom-flannel-204000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-204000" primary control-plane node in "custom-flannel-204000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-204000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1011 15:10:23.957506    5601 out.go:345] Setting OutFile to fd 1 ...
	I1011 15:10:23.957670    5601 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:10:23.957674    5601 out.go:358] Setting ErrFile to fd 2...
	I1011 15:10:23.957676    5601 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:10:23.957809    5601 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 15:10:23.958967    5601 out.go:352] Setting JSON to false
	I1011 15:10:23.977680    5601 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5993,"bootTime":1728678630,"procs":505,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 15:10:23.977751    5601 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 15:10:23.982129    5601 out.go:177] * [custom-flannel-204000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 15:10:23.989996    5601 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 15:10:23.990096    5601 notify.go:220] Checking for updates...
	I1011 15:10:23.997179    5601 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 15:10:23.998685    5601 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 15:10:24.003183    5601 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 15:10:24.006186    5601 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 15:10:24.007604    5601 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 15:10:24.010569    5601 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 15:10:24.010644    5601 config.go:182] Loaded profile config "stopped-upgrade-583000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1011 15:10:24.010681    5601 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 15:10:24.015173    5601 out.go:177] * Using the qemu2 driver based on user configuration
	I1011 15:10:24.020122    5601 start.go:297] selected driver: qemu2
	I1011 15:10:24.020129    5601 start.go:901] validating driver "qemu2" against <nil>
	I1011 15:10:24.020136    5601 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 15:10:24.022612    5601 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 15:10:24.025177    5601 out.go:177] * Automatically selected the socket_vmnet network
	I1011 15:10:24.028233    5601 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 15:10:24.028251    5601 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1011 15:10:24.028258    5601 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1011 15:10:24.028290    5601 start.go:340] cluster config:
	{Name:custom-flannel-204000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-204000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 15:10:24.032660    5601 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:10:24.041101    5601 out.go:177] * Starting "custom-flannel-204000" primary control-plane node in "custom-flannel-204000" cluster
	I1011 15:10:24.045118    5601 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1011 15:10:24.045135    5601 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1011 15:10:24.045146    5601 cache.go:56] Caching tarball of preloaded images
	I1011 15:10:24.045219    5601 preload.go:172] Found /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1011 15:10:24.045224    5601 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1011 15:10:24.045283    5601 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/custom-flannel-204000/config.json ...
	I1011 15:10:24.045293    5601 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/custom-flannel-204000/config.json: {Name:mk68d67232eece12bacc0778dea58aa6aec86301 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 15:10:24.045517    5601 start.go:360] acquireMachinesLock for custom-flannel-204000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:10:24.045561    5601 start.go:364] duration metric: took 36.75µs to acquireMachinesLock for "custom-flannel-204000"
	I1011 15:10:24.045574    5601 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-204000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-204000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 15:10:24.045610    5601 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 15:10:24.053084    5601 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1011 15:10:24.068577    5601 start.go:159] libmachine.API.Create for "custom-flannel-204000" (driver="qemu2")
	I1011 15:10:24.068607    5601 client.go:168] LocalClient.Create starting
	I1011 15:10:24.068677    5601 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 15:10:24.068715    5601 main.go:141] libmachine: Decoding PEM data...
	I1011 15:10:24.068725    5601 main.go:141] libmachine: Parsing certificate...
	I1011 15:10:24.068765    5601 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 15:10:24.068794    5601 main.go:141] libmachine: Decoding PEM data...
	I1011 15:10:24.068803    5601 main.go:141] libmachine: Parsing certificate...
	I1011 15:10:24.069144    5601 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 15:10:24.224725    5601 main.go:141] libmachine: Creating SSH key...
	I1011 15:10:24.293190    5601 main.go:141] libmachine: Creating Disk image...
	I1011 15:10:24.293203    5601 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 15:10:24.293428    5601 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/custom-flannel-204000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/custom-flannel-204000/disk.qcow2
	I1011 15:10:24.303356    5601 main.go:141] libmachine: STDOUT: 
	I1011 15:10:24.303375    5601 main.go:141] libmachine: STDERR: 
	I1011 15:10:24.303434    5601 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/custom-flannel-204000/disk.qcow2 +20000M
	I1011 15:10:24.312191    5601 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 15:10:24.312210    5601 main.go:141] libmachine: STDERR: 
	I1011 15:10:24.312223    5601 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/custom-flannel-204000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/custom-flannel-204000/disk.qcow2
	I1011 15:10:24.312233    5601 main.go:141] libmachine: Starting QEMU VM...
	I1011 15:10:24.312244    5601 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:10:24.312277    5601 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/custom-flannel-204000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/custom-flannel-204000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/custom-flannel-204000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:c4:47:f0:56:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/custom-flannel-204000/disk.qcow2
	I1011 15:10:24.314197    5601 main.go:141] libmachine: STDOUT: 
	I1011 15:10:24.314215    5601 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:10:24.314238    5601 client.go:171] duration metric: took 245.630667ms to LocalClient.Create
	I1011 15:10:26.316262    5601 start.go:128] duration metric: took 2.270682833s to createHost
	I1011 15:10:26.316275    5601 start.go:83] releasing machines lock for "custom-flannel-204000", held for 2.270745459s
	W1011 15:10:26.316295    5601 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:10:26.321869    5601 out.go:177] * Deleting "custom-flannel-204000" in qemu2 ...
	W1011 15:10:26.336935    5601 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:10:26.336944    5601 start.go:729] Will try again in 5 seconds ...
	I1011 15:10:31.338157    5601 start.go:360] acquireMachinesLock for custom-flannel-204000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:10:31.338743    5601 start.go:364] duration metric: took 474.25µs to acquireMachinesLock for "custom-flannel-204000"
	I1011 15:10:31.338887    5601 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-204000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-204000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 15:10:31.339169    5601 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 15:10:31.352685    5601 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1011 15:10:31.401284    5601 start.go:159] libmachine.API.Create for "custom-flannel-204000" (driver="qemu2")
	I1011 15:10:31.401341    5601 client.go:168] LocalClient.Create starting
	I1011 15:10:31.401491    5601 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 15:10:31.401577    5601 main.go:141] libmachine: Decoding PEM data...
	I1011 15:10:31.401594    5601 main.go:141] libmachine: Parsing certificate...
	I1011 15:10:31.401669    5601 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 15:10:31.401727    5601 main.go:141] libmachine: Decoding PEM data...
	I1011 15:10:31.401737    5601 main.go:141] libmachine: Parsing certificate...
	I1011 15:10:31.402294    5601 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 15:10:31.569103    5601 main.go:141] libmachine: Creating SSH key...
	I1011 15:10:31.728215    5601 main.go:141] libmachine: Creating Disk image...
	I1011 15:10:31.728224    5601 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 15:10:31.728477    5601 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/custom-flannel-204000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/custom-flannel-204000/disk.qcow2
	I1011 15:10:31.739012    5601 main.go:141] libmachine: STDOUT: 
	I1011 15:10:31.739035    5601 main.go:141] libmachine: STDERR: 
	I1011 15:10:31.739105    5601 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/custom-flannel-204000/disk.qcow2 +20000M
	I1011 15:10:31.748135    5601 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 15:10:31.748151    5601 main.go:141] libmachine: STDERR: 
	I1011 15:10:31.748165    5601 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/custom-flannel-204000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/custom-flannel-204000/disk.qcow2
	I1011 15:10:31.748169    5601 main.go:141] libmachine: Starting QEMU VM...
	I1011 15:10:31.748184    5601 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:10:31.748214    5601 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/custom-flannel-204000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/custom-flannel-204000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/custom-flannel-204000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:c6:fe:00:80:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/custom-flannel-204000/disk.qcow2
	I1011 15:10:31.750197    5601 main.go:141] libmachine: STDOUT: 
	I1011 15:10:31.750213    5601 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:10:31.750233    5601 client.go:171] duration metric: took 348.892542ms to LocalClient.Create
	I1011 15:10:33.752292    5601 start.go:128] duration metric: took 2.413138292s to createHost
	I1011 15:10:33.752327    5601 start.go:83] releasing machines lock for "custom-flannel-204000", held for 2.413601291s
	W1011 15:10:33.752488    5601 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-204000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-204000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:10:33.759882    5601 out.go:201] 
	W1011 15:10:33.766946    5601 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 15:10:33.766954    5601 out.go:270] * 
	* 
	W1011 15:10:33.767477    5601 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 15:10:33.777880    5601 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.86s)
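The custom-flannel profile dies on the same socket_vmnet refusal before the kube-flannel.yaml manifest is ever applied. For a local re-run, the recovery path the log itself suggests is roughly the following; the profile name is specific to this run, and the start command is the exact invocation used by the test above.

	# Hedged sketch: clean up the half-created profile, then retry the test's own invocation.
	out/minikube-darwin-arm64 delete -p custom-flannel-204000
	out/minikube-darwin-arm64 start -p custom-flannel-204000 --memory=3072 --alsologtostderr \
	    --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2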

TestNetworkPlugins/group/false/Start (9.87s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-204000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-204000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.872913625s)

-- stdout --
	* [false-204000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-204000" primary control-plane node in "false-204000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-204000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1011 15:10:36.336866    5721 out.go:345] Setting OutFile to fd 1 ...
	I1011 15:10:36.337031    5721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:10:36.337035    5721 out.go:358] Setting ErrFile to fd 2...
	I1011 15:10:36.337038    5721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:10:36.337167    5721 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 15:10:36.338383    5721 out.go:352] Setting JSON to false
	I1011 15:10:36.356483    5721 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6006,"bootTime":1728678630,"procs":505,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 15:10:36.356558    5721 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 15:10:36.362197    5721 out.go:177] * [false-204000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 15:10:36.370152    5721 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 15:10:36.370207    5721 notify.go:220] Checking for updates...
	I1011 15:10:36.377069    5721 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 15:10:36.380064    5721 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 15:10:36.383020    5721 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 15:10:36.386100    5721 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 15:10:36.389113    5721 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 15:10:36.392391    5721 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 15:10:36.392465    5721 config.go:182] Loaded profile config "stopped-upgrade-583000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1011 15:10:36.392507    5721 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 15:10:36.397023    5721 out.go:177] * Using the qemu2 driver based on user configuration
	I1011 15:10:36.404078    5721 start.go:297] selected driver: qemu2
	I1011 15:10:36.404085    5721 start.go:901] validating driver "qemu2" against <nil>
	I1011 15:10:36.404093    5721 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 15:10:36.406466    5721 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 15:10:36.410058    5721 out.go:177] * Automatically selected the socket_vmnet network
	I1011 15:10:36.413240    5721 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 15:10:36.413267    5721 cni.go:84] Creating CNI manager for "false"
	I1011 15:10:36.413294    5721 start.go:340] cluster config:
	{Name:false-204000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-204000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_
client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 15:10:36.417838    5721 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:10:36.426061    5721 out.go:177] * Starting "false-204000" primary control-plane node in "false-204000" cluster
	I1011 15:10:36.429028    5721 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1011 15:10:36.429051    5721 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1011 15:10:36.429060    5721 cache.go:56] Caching tarball of preloaded images
	I1011 15:10:36.429131    5721 preload.go:172] Found /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1011 15:10:36.429136    5721 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1011 15:10:36.429186    5721 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/false-204000/config.json ...
	I1011 15:10:36.429196    5721 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/false-204000/config.json: {Name:mk7ad500c3b3a5a526fb2fa4db59db67c4b5fc88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 15:10:36.429520    5721 start.go:360] acquireMachinesLock for false-204000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:10:36.429565    5721 start.go:364] duration metric: took 39.459µs to acquireMachinesLock for "false-204000"
	I1011 15:10:36.429577    5721 start.go:93] Provisioning new machine with config: &{Name:false-204000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:false-204000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 15:10:36.429602    5721 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 15:10:36.436983    5721 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1011 15:10:36.453017    5721 start.go:159] libmachine.API.Create for "false-204000" (driver="qemu2")
	I1011 15:10:36.453055    5721 client.go:168] LocalClient.Create starting
	I1011 15:10:36.453124    5721 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 15:10:36.453161    5721 main.go:141] libmachine: Decoding PEM data...
	I1011 15:10:36.453172    5721 main.go:141] libmachine: Parsing certificate...
	I1011 15:10:36.453214    5721 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 15:10:36.453242    5721 main.go:141] libmachine: Decoding PEM data...
	I1011 15:10:36.453250    5721 main.go:141] libmachine: Parsing certificate...
	I1011 15:10:36.453623    5721 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 15:10:36.609529    5721 main.go:141] libmachine: Creating SSH key...
	I1011 15:10:36.663130    5721 main.go:141] libmachine: Creating Disk image...
	I1011 15:10:36.663136    5721 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 15:10:36.663357    5721 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/false-204000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/false-204000/disk.qcow2
	I1011 15:10:36.673463    5721 main.go:141] libmachine: STDOUT: 
	I1011 15:10:36.673483    5721 main.go:141] libmachine: STDERR: 
	I1011 15:10:36.673537    5721 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/false-204000/disk.qcow2 +20000M
	I1011 15:10:36.681981    5721 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 15:10:36.681998    5721 main.go:141] libmachine: STDERR: 
	I1011 15:10:36.682013    5721 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/false-204000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/false-204000/disk.qcow2
	I1011 15:10:36.682019    5721 main.go:141] libmachine: Starting QEMU VM...
	I1011 15:10:36.682032    5721 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:10:36.682070    5721 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/false-204000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/false-204000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/false-204000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:d4:e1:f9:5b:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/false-204000/disk.qcow2
	I1011 15:10:36.683810    5721 main.go:141] libmachine: STDOUT: 
	I1011 15:10:36.683824    5721 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:10:36.683843    5721 client.go:171] duration metric: took 230.786416ms to LocalClient.Create
	I1011 15:10:38.686036    5721 start.go:128] duration metric: took 2.256434959s to createHost
	I1011 15:10:38.686167    5721 start.go:83] releasing machines lock for "false-204000", held for 2.256611083s
	W1011 15:10:38.686283    5721 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:10:38.699671    5721 out.go:177] * Deleting "false-204000" in qemu2 ...
	W1011 15:10:38.724155    5721 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:10:38.724189    5721 start.go:729] Will try again in 5 seconds ...
	I1011 15:10:43.726393    5721 start.go:360] acquireMachinesLock for false-204000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:10:43.727003    5721 start.go:364] duration metric: took 481.125µs to acquireMachinesLock for "false-204000"
	I1011 15:10:43.727146    5721 start.go:93] Provisioning new machine with config: &{Name:false-204000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:false-204000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 15:10:43.727448    5721 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 15:10:43.738157    5721 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1011 15:10:43.786425    5721 start.go:159] libmachine.API.Create for "false-204000" (driver="qemu2")
	I1011 15:10:43.786480    5721 client.go:168] LocalClient.Create starting
	I1011 15:10:43.786631    5721 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 15:10:43.786717    5721 main.go:141] libmachine: Decoding PEM data...
	I1011 15:10:43.786736    5721 main.go:141] libmachine: Parsing certificate...
	I1011 15:10:43.786803    5721 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 15:10:43.786860    5721 main.go:141] libmachine: Decoding PEM data...
	I1011 15:10:43.786874    5721 main.go:141] libmachine: Parsing certificate...
	I1011 15:10:43.787550    5721 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 15:10:43.952844    5721 main.go:141] libmachine: Creating SSH key...
	I1011 15:10:44.118187    5721 main.go:141] libmachine: Creating Disk image...
	I1011 15:10:44.118199    5721 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 15:10:44.118424    5721 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/false-204000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/false-204000/disk.qcow2
	I1011 15:10:44.128306    5721 main.go:141] libmachine: STDOUT: 
	I1011 15:10:44.128322    5721 main.go:141] libmachine: STDERR: 
	I1011 15:10:44.128376    5721 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/false-204000/disk.qcow2 +20000M
	I1011 15:10:44.136880    5721 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 15:10:44.136893    5721 main.go:141] libmachine: STDERR: 
	I1011 15:10:44.136908    5721 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/false-204000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/false-204000/disk.qcow2
	I1011 15:10:44.136913    5721 main.go:141] libmachine: Starting QEMU VM...
	I1011 15:10:44.136932    5721 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:10:44.136962    5721 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/false-204000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/false-204000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/false-204000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:00:b1:c5:6c:25 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/false-204000/disk.qcow2
	I1011 15:10:44.138754    5721 main.go:141] libmachine: STDOUT: 
	I1011 15:10:44.138768    5721 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:10:44.138781    5721 client.go:171] duration metric: took 352.299291ms to LocalClient.Create
	I1011 15:10:46.140883    5721 start.go:128] duration metric: took 2.4134465s to createHost
	I1011 15:10:46.140951    5721 start.go:83] releasing machines lock for "false-204000", held for 2.413964083s
	W1011 15:10:46.141077    5721 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-204000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-204000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:10:46.150529    5721 out.go:201] 
	W1011 15:10:46.154049    5721 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 15:10:46.154065    5721 out.go:270] * 
	* 
	W1011 15:10:46.154995    5721 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 15:10:46.167493    5721 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.87s)
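Every start attempt in this group fails at the same point: `socket_vmnet_client` cannot reach the `/var/run/socket_vmnet` unix socket ("Connection refused"), so QEMU never gets a network fd and minikube exits with status 80. As a quick sanity check on the host, the reachability of that socket can be probed directly; the sketch below is illustrative only (not part of the test suite), assumes the default `SocketVMnetPath` shown in the logged cluster config, and simply reports whether a socket_vmnet daemon is listening.

	// probe_socket_vmnet.go - minimal diagnostic sketch (hypothetical helper,
	// not part of minikube or this test run). It dials the socket_vmnet
	// control socket the same way socket_vmnet_client does before handing
	// the connection fd to qemu-system-aarch64 (-netdev socket,fd=3).
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Path taken from SocketVMnetPath in the cluster config above.
		const sock = "/var/run/socket_vmnet"

		// If no daemon is listening, this dial fails in the same way as the
		// logged error: connection refused (or "no such file or directory"
		// if the socket file is missing entirely).
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet daemon is listening on", sock)
	}

If this probe fails on the build agent, the remaining Connection-refused failures in this section are expected, since every network-plugin profile goes through the same socket_vmnet path.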

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (9.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-204000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-204000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.80013575s)

                                                
                                                
-- stdout --
	* [kindnet-204000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-204000" primary control-plane node in "kindnet-204000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-204000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 15:10:48.500912    5832 out.go:345] Setting OutFile to fd 1 ...
	I1011 15:10:48.501058    5832 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:10:48.501061    5832 out.go:358] Setting ErrFile to fd 2...
	I1011 15:10:48.501064    5832 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:10:48.501222    5832 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 15:10:48.502425    5832 out.go:352] Setting JSON to false
	I1011 15:10:48.520656    5832 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6018,"bootTime":1728678630,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 15:10:48.520721    5832 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 15:10:48.530016    5832 out.go:177] * [kindnet-204000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 15:10:48.531691    5832 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 15:10:48.531741    5832 notify.go:220] Checking for updates...
	I1011 15:10:48.539053    5832 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 15:10:48.542919    5832 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 15:10:48.547040    5832 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 15:10:48.550066    5832 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 15:10:48.552994    5832 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 15:10:48.556449    5832 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 15:10:48.556523    5832 config.go:182] Loaded profile config "stopped-upgrade-583000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1011 15:10:48.556568    5832 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 15:10:48.561039    5832 out.go:177] * Using the qemu2 driver based on user configuration
	I1011 15:10:48.568013    5832 start.go:297] selected driver: qemu2
	I1011 15:10:48.568019    5832 start.go:901] validating driver "qemu2" against <nil>
	I1011 15:10:48.568026    5832 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 15:10:48.570419    5832 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 15:10:48.573100    5832 out.go:177] * Automatically selected the socket_vmnet network
	I1011 15:10:48.576121    5832 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 15:10:48.576146    5832 cni.go:84] Creating CNI manager for "kindnet"
	I1011 15:10:48.576150    5832 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1011 15:10:48.576183    5832 start.go:340] cluster config:
	{Name:kindnet-204000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-204000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 15:10:48.580431    5832 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:10:48.588038    5832 out.go:177] * Starting "kindnet-204000" primary control-plane node in "kindnet-204000" cluster
	I1011 15:10:48.592055    5832 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1011 15:10:48.592070    5832 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1011 15:10:48.592079    5832 cache.go:56] Caching tarball of preloaded images
	I1011 15:10:48.592172    5832 preload.go:172] Found /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1011 15:10:48.592177    5832 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1011 15:10:48.592240    5832 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/kindnet-204000/config.json ...
	I1011 15:10:48.592249    5832 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/kindnet-204000/config.json: {Name:mkea5e0096c7cf19588907be5264d94ff7df1971 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 15:10:48.592470    5832 start.go:360] acquireMachinesLock for kindnet-204000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:10:48.592512    5832 start.go:364] duration metric: took 36.625µs to acquireMachinesLock for "kindnet-204000"
	I1011 15:10:48.592524    5832 start.go:93] Provisioning new machine with config: &{Name:kindnet-204000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:kindnet-204000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 15:10:48.592561    5832 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 15:10:48.596085    5832 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1011 15:10:48.611103    5832 start.go:159] libmachine.API.Create for "kindnet-204000" (driver="qemu2")
	I1011 15:10:48.611128    5832 client.go:168] LocalClient.Create starting
	I1011 15:10:48.611193    5832 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 15:10:48.611229    5832 main.go:141] libmachine: Decoding PEM data...
	I1011 15:10:48.611240    5832 main.go:141] libmachine: Parsing certificate...
	I1011 15:10:48.611282    5832 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 15:10:48.611314    5832 main.go:141] libmachine: Decoding PEM data...
	I1011 15:10:48.611328    5832 main.go:141] libmachine: Parsing certificate...
	I1011 15:10:48.611720    5832 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 15:10:48.765755    5832 main.go:141] libmachine: Creating SSH key...
	I1011 15:10:48.798771    5832 main.go:141] libmachine: Creating Disk image...
	I1011 15:10:48.798776    5832 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 15:10:48.798987    5832 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kindnet-204000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kindnet-204000/disk.qcow2
	I1011 15:10:48.808825    5832 main.go:141] libmachine: STDOUT: 
	I1011 15:10:48.808857    5832 main.go:141] libmachine: STDERR: 
	I1011 15:10:48.808919    5832 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kindnet-204000/disk.qcow2 +20000M
	I1011 15:10:48.817481    5832 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 15:10:48.817497    5832 main.go:141] libmachine: STDERR: 
	I1011 15:10:48.817514    5832 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kindnet-204000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kindnet-204000/disk.qcow2
	I1011 15:10:48.817518    5832 main.go:141] libmachine: Starting QEMU VM...
	I1011 15:10:48.817530    5832 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:10:48.817568    5832 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kindnet-204000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kindnet-204000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kindnet-204000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:cb:f1:48:2e:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kindnet-204000/disk.qcow2
	I1011 15:10:48.819455    5832 main.go:141] libmachine: STDOUT: 
	I1011 15:10:48.819469    5832 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:10:48.819489    5832 client.go:171] duration metric: took 208.359792ms to LocalClient.Create
	I1011 15:10:50.821677    5832 start.go:128] duration metric: took 2.2291205s to createHost
	I1011 15:10:50.821793    5832 start.go:83] releasing machines lock for "kindnet-204000", held for 2.229306875s
	W1011 15:10:50.821895    5832 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:10:50.832173    5832 out.go:177] * Deleting "kindnet-204000" in qemu2 ...
	W1011 15:10:50.858473    5832 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:10:50.858514    5832 start.go:729] Will try again in 5 seconds ...
	I1011 15:10:55.860614    5832 start.go:360] acquireMachinesLock for kindnet-204000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:10:55.861305    5832 start.go:364] duration metric: took 603.5µs to acquireMachinesLock for "kindnet-204000"
	I1011 15:10:55.861446    5832 start.go:93] Provisioning new machine with config: &{Name:kindnet-204000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:kindnet-204000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 15:10:55.861741    5832 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 15:10:55.871363    5832 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1011 15:10:55.922250    5832 start.go:159] libmachine.API.Create for "kindnet-204000" (driver="qemu2")
	I1011 15:10:55.922299    5832 client.go:168] LocalClient.Create starting
	I1011 15:10:55.922451    5832 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 15:10:55.922539    5832 main.go:141] libmachine: Decoding PEM data...
	I1011 15:10:55.922561    5832 main.go:141] libmachine: Parsing certificate...
	I1011 15:10:55.922632    5832 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 15:10:55.922691    5832 main.go:141] libmachine: Decoding PEM data...
	I1011 15:10:55.922703    5832 main.go:141] libmachine: Parsing certificate...
	I1011 15:10:55.923306    5832 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 15:10:56.089267    5832 main.go:141] libmachine: Creating SSH key...
	I1011 15:10:56.207088    5832 main.go:141] libmachine: Creating Disk image...
	I1011 15:10:56.207099    5832 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 15:10:56.207330    5832 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kindnet-204000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kindnet-204000/disk.qcow2
	I1011 15:10:56.217420    5832 main.go:141] libmachine: STDOUT: 
	I1011 15:10:56.217444    5832 main.go:141] libmachine: STDERR: 
	I1011 15:10:56.217498    5832 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kindnet-204000/disk.qcow2 +20000M
	I1011 15:10:56.226062    5832 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 15:10:56.226113    5832 main.go:141] libmachine: STDERR: 
	I1011 15:10:56.226126    5832 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kindnet-204000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kindnet-204000/disk.qcow2
	I1011 15:10:56.226130    5832 main.go:141] libmachine: Starting QEMU VM...
	I1011 15:10:56.226141    5832 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:10:56.226170    5832 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kindnet-204000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kindnet-204000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kindnet-204000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:11:67:33:65:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kindnet-204000/disk.qcow2
	I1011 15:10:56.228034    5832 main.go:141] libmachine: STDOUT: 
	I1011 15:10:56.228128    5832 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:10:56.228155    5832 client.go:171] duration metric: took 305.857ms to LocalClient.Create
	I1011 15:10:58.230318    5832 start.go:128] duration metric: took 2.368574917s to createHost
	I1011 15:10:58.230404    5832 start.go:83] releasing machines lock for "kindnet-204000", held for 2.369113041s
	W1011 15:10:58.230766    5832 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-204000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-204000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:10:58.242354    5832 out.go:201] 
	W1011 15:10:58.246217    5832 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 15:10:58.246239    5832 out.go:270] * 
	* 
	W1011 15:10:58.248239    5832 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 15:10:58.258249    5832 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.80s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (9.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-204000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-204000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.767336917s)

                                                
                                                
-- stdout --
	* [flannel-204000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-204000" primary control-plane node in "flannel-204000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-204000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 15:11:00.714616    5945 out.go:345] Setting OutFile to fd 1 ...
	I1011 15:11:00.714782    5945 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:11:00.714786    5945 out.go:358] Setting ErrFile to fd 2...
	I1011 15:11:00.714788    5945 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:11:00.714918    5945 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 15:11:00.716140    5945 out.go:352] Setting JSON to false
	I1011 15:11:00.733868    5945 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6030,"bootTime":1728678630,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 15:11:00.733942    5945 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 15:11:00.739908    5945 out.go:177] * [flannel-204000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 15:11:00.747830    5945 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 15:11:00.747919    5945 notify.go:220] Checking for updates...
	I1011 15:11:00.754882    5945 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 15:11:00.757838    5945 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 15:11:00.760933    5945 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 15:11:00.763884    5945 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 15:11:00.766850    5945 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 15:11:00.770241    5945 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 15:11:00.770308    5945 config.go:182] Loaded profile config "stopped-upgrade-583000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1011 15:11:00.770364    5945 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 15:11:00.774890    5945 out.go:177] * Using the qemu2 driver based on user configuration
	I1011 15:11:00.781883    5945 start.go:297] selected driver: qemu2
	I1011 15:11:00.781890    5945 start.go:901] validating driver "qemu2" against <nil>
	I1011 15:11:00.781897    5945 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 15:11:00.784328    5945 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 15:11:00.787860    5945 out.go:177] * Automatically selected the socket_vmnet network
	I1011 15:11:00.790841    5945 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 15:11:00.790856    5945 cni.go:84] Creating CNI manager for "flannel"
	I1011 15:11:00.790859    5945 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1011 15:11:00.790893    5945 start.go:340] cluster config:
	{Name:flannel-204000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-204000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 15:11:00.795103    5945 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:11:00.802768    5945 out.go:177] * Starting "flannel-204000" primary control-plane node in "flannel-204000" cluster
	I1011 15:11:00.806890    5945 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1011 15:11:00.806906    5945 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1011 15:11:00.806915    5945 cache.go:56] Caching tarball of preloaded images
	I1011 15:11:00.807000    5945 preload.go:172] Found /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1011 15:11:00.807006    5945 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1011 15:11:00.807055    5945 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/flannel-204000/config.json ...
	I1011 15:11:00.807071    5945 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/flannel-204000/config.json: {Name:mk28b4cd1291e429d2d950495ec25ef89c7c3b77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 15:11:00.807300    5945 start.go:360] acquireMachinesLock for flannel-204000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:11:00.807342    5945 start.go:364] duration metric: took 36.625µs to acquireMachinesLock for "flannel-204000"
	I1011 15:11:00.807353    5945 start.go:93] Provisioning new machine with config: &{Name:flannel-204000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:flannel-204000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 15:11:00.807380    5945 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 15:11:00.810755    5945 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1011 15:11:00.825690    5945 start.go:159] libmachine.API.Create for "flannel-204000" (driver="qemu2")
	I1011 15:11:00.825717    5945 client.go:168] LocalClient.Create starting
	I1011 15:11:00.825784    5945 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 15:11:00.825822    5945 main.go:141] libmachine: Decoding PEM data...
	I1011 15:11:00.825832    5945 main.go:141] libmachine: Parsing certificate...
	I1011 15:11:00.825869    5945 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 15:11:00.825898    5945 main.go:141] libmachine: Decoding PEM data...
	I1011 15:11:00.825908    5945 main.go:141] libmachine: Parsing certificate...
	I1011 15:11:00.826248    5945 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 15:11:00.980160    5945 main.go:141] libmachine: Creating SSH key...
	I1011 15:11:01.027245    5945 main.go:141] libmachine: Creating Disk image...
	I1011 15:11:01.027251    5945 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 15:11:01.027467    5945 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/flannel-204000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/flannel-204000/disk.qcow2
	I1011 15:11:01.037345    5945 main.go:141] libmachine: STDOUT: 
	I1011 15:11:01.037368    5945 main.go:141] libmachine: STDERR: 
	I1011 15:11:01.037427    5945 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/flannel-204000/disk.qcow2 +20000M
	I1011 15:11:01.046363    5945 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 15:11:01.046377    5945 main.go:141] libmachine: STDERR: 
	I1011 15:11:01.046401    5945 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/flannel-204000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/flannel-204000/disk.qcow2
	I1011 15:11:01.046407    5945 main.go:141] libmachine: Starting QEMU VM...
	I1011 15:11:01.046419    5945 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:11:01.046448    5945 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/flannel-204000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/flannel-204000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/flannel-204000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:4f:6a:35:5e:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/flannel-204000/disk.qcow2
	I1011 15:11:01.048328    5945 main.go:141] libmachine: STDOUT: 
	I1011 15:11:01.048342    5945 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:11:01.048364    5945 client.go:171] duration metric: took 222.6445ms to LocalClient.Create
	I1011 15:11:03.050607    5945 start.go:128] duration metric: took 2.243236s to createHost
	I1011 15:11:03.050675    5945 start.go:83] releasing machines lock for "flannel-204000", held for 2.243360875s
	W1011 15:11:03.050731    5945 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:11:03.063705    5945 out.go:177] * Deleting "flannel-204000" in qemu2 ...
	W1011 15:11:03.086443    5945 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:11:03.086474    5945 start.go:729] Will try again in 5 seconds ...
	I1011 15:11:08.088729    5945 start.go:360] acquireMachinesLock for flannel-204000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:11:08.089420    5945 start.go:364] duration metric: took 564.417µs to acquireMachinesLock for "flannel-204000"
	I1011 15:11:08.089494    5945 start.go:93] Provisioning new machine with config: &{Name:flannel-204000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:flannel-204000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 15:11:08.089823    5945 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 15:11:08.094472    5945 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1011 15:11:08.143295    5945 start.go:159] libmachine.API.Create for "flannel-204000" (driver="qemu2")
	I1011 15:11:08.143368    5945 client.go:168] LocalClient.Create starting
	I1011 15:11:08.143509    5945 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 15:11:08.143591    5945 main.go:141] libmachine: Decoding PEM data...
	I1011 15:11:08.143609    5945 main.go:141] libmachine: Parsing certificate...
	I1011 15:11:08.143689    5945 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 15:11:08.143751    5945 main.go:141] libmachine: Decoding PEM data...
	I1011 15:11:08.143763    5945 main.go:141] libmachine: Parsing certificate...
	I1011 15:11:08.144465    5945 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 15:11:08.309519    5945 main.go:141] libmachine: Creating SSH key...
	I1011 15:11:08.386654    5945 main.go:141] libmachine: Creating Disk image...
	I1011 15:11:08.386660    5945 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 15:11:08.386885    5945 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/flannel-204000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/flannel-204000/disk.qcow2
	I1011 15:11:08.397295    5945 main.go:141] libmachine: STDOUT: 
	I1011 15:11:08.397313    5945 main.go:141] libmachine: STDERR: 
	I1011 15:11:08.397370    5945 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/flannel-204000/disk.qcow2 +20000M
	I1011 15:11:08.406105    5945 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 15:11:08.406121    5945 main.go:141] libmachine: STDERR: 
	I1011 15:11:08.406133    5945 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/flannel-204000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/flannel-204000/disk.qcow2
	I1011 15:11:08.406139    5945 main.go:141] libmachine: Starting QEMU VM...
	I1011 15:11:08.406150    5945 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:11:08.406182    5945 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/flannel-204000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/flannel-204000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/flannel-204000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:7b:c9:5a:c0:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/flannel-204000/disk.qcow2
	I1011 15:11:08.408141    5945 main.go:141] libmachine: STDOUT: 
	I1011 15:11:08.408155    5945 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:11:08.408166    5945 client.go:171] duration metric: took 264.79525ms to LocalClient.Create
	I1011 15:11:10.410314    5945 start.go:128] duration metric: took 2.320491417s to createHost
	I1011 15:11:10.410387    5945 start.go:83] releasing machines lock for "flannel-204000", held for 2.320978709s
	W1011 15:11:10.410759    5945 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-204000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-204000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:11:10.423406    5945 out.go:201] 
	W1011 15:11:10.426457    5945 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 15:11:10.426494    5945 out.go:270] * 
	* 
	W1011 15:11:10.428439    5945 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 15:11:10.437362    5945 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.77s)
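This failure, and the network-plugin Start failures that follow, share the same root cause visible in the stderr above: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, the client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), both create attempts therefore fail with exit status 1, and minikube exits with GUEST_PROVISION (exit status 80). A minimal host-side check, offered here only as a hedged diagnostic sketch: it assumes socket_vmnet is installed under /opt/socket_vmnet as the paths in this log suggest, and the gateway address is the upstream default rather than anything taken from this run.

	# Hedged diagnostic sketch; not part of the test run above.
	ls -l /var/run/socket_vmnet                    # the socket file must exist and be accessible
	sudo launchctl list | grep -i socket_vmnet     # check whether a socket_vmnet daemon is loaded at all
	# If no daemon is running, starting one manually (vmnet requires root) should clear
	# the "Connection refused" error; the gateway value is the upstream default, an assumption here:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet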

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (9.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-204000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-204000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.885579417s)

                                                
                                                
-- stdout --
	* [enable-default-cni-204000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-204000" primary control-plane node in "enable-default-cni-204000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-204000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 15:11:12.996985    6062 out.go:345] Setting OutFile to fd 1 ...
	I1011 15:11:12.997158    6062 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:11:12.997162    6062 out.go:358] Setting ErrFile to fd 2...
	I1011 15:11:12.997164    6062 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:11:12.997296    6062 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 15:11:12.998578    6062 out.go:352] Setting JSON to false
	I1011 15:11:13.017070    6062 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6043,"bootTime":1728678630,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 15:11:13.017138    6062 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 15:11:13.021798    6062 out.go:177] * [enable-default-cni-204000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 15:11:13.029722    6062 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 15:11:13.029753    6062 notify.go:220] Checking for updates...
	I1011 15:11:13.036758    6062 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 15:11:13.039770    6062 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 15:11:13.043691    6062 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 15:11:13.046795    6062 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 15:11:13.049812    6062 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 15:11:13.053114    6062 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 15:11:13.053200    6062 config.go:182] Loaded profile config "stopped-upgrade-583000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1011 15:11:13.053244    6062 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 15:11:13.056756    6062 out.go:177] * Using the qemu2 driver based on user configuration
	I1011 15:11:13.063742    6062 start.go:297] selected driver: qemu2
	I1011 15:11:13.063748    6062 start.go:901] validating driver "qemu2" against <nil>
	I1011 15:11:13.063755    6062 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 15:11:13.066118    6062 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 15:11:13.070749    6062 out.go:177] * Automatically selected the socket_vmnet network
	E1011 15:11:13.073778    6062 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1011 15:11:13.073795    6062 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 15:11:13.073818    6062 cni.go:84] Creating CNI manager for "bridge"
	I1011 15:11:13.073831    6062 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1011 15:11:13.073875    6062 start.go:340] cluster config:
	{Name:enable-default-cni-204000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-204000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/
socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 15:11:13.078341    6062 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:11:13.085782    6062 out.go:177] * Starting "enable-default-cni-204000" primary control-plane node in "enable-default-cni-204000" cluster
	I1011 15:11:13.089744    6062 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1011 15:11:13.089761    6062 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1011 15:11:13.089777    6062 cache.go:56] Caching tarball of preloaded images
	I1011 15:11:13.089854    6062 preload.go:172] Found /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1011 15:11:13.089860    6062 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1011 15:11:13.089922    6062 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/enable-default-cni-204000/config.json ...
	I1011 15:11:13.089933    6062 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/enable-default-cni-204000/config.json: {Name:mk73a9c82ba3aaf9ffe79a5142c3d91107100598 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 15:11:13.090166    6062 start.go:360] acquireMachinesLock for enable-default-cni-204000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:11:13.090215    6062 start.go:364] duration metric: took 41.875µs to acquireMachinesLock for "enable-default-cni-204000"
	I1011 15:11:13.090228    6062 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-204000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-204000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 15:11:13.090266    6062 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 15:11:13.093658    6062 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1011 15:11:13.109105    6062 start.go:159] libmachine.API.Create for "enable-default-cni-204000" (driver="qemu2")
	I1011 15:11:13.109130    6062 client.go:168] LocalClient.Create starting
	I1011 15:11:13.109207    6062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 15:11:13.109250    6062 main.go:141] libmachine: Decoding PEM data...
	I1011 15:11:13.109261    6062 main.go:141] libmachine: Parsing certificate...
	I1011 15:11:13.109302    6062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 15:11:13.109331    6062 main.go:141] libmachine: Decoding PEM data...
	I1011 15:11:13.109350    6062 main.go:141] libmachine: Parsing certificate...
	I1011 15:11:13.109711    6062 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 15:11:13.264082    6062 main.go:141] libmachine: Creating SSH key...
	I1011 15:11:13.321574    6062 main.go:141] libmachine: Creating Disk image...
	I1011 15:11:13.321582    6062 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 15:11:13.321839    6062 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/enable-default-cni-204000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/enable-default-cni-204000/disk.qcow2
	I1011 15:11:13.332757    6062 main.go:141] libmachine: STDOUT: 
	I1011 15:11:13.332780    6062 main.go:141] libmachine: STDERR: 
	I1011 15:11:13.332853    6062 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/enable-default-cni-204000/disk.qcow2 +20000M
	I1011 15:11:13.343003    6062 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 15:11:13.343066    6062 main.go:141] libmachine: STDERR: 
	I1011 15:11:13.343087    6062 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/enable-default-cni-204000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/enable-default-cni-204000/disk.qcow2
	I1011 15:11:13.343094    6062 main.go:141] libmachine: Starting QEMU VM...
	I1011 15:11:13.343111    6062 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:11:13.343142    6062 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/enable-default-cni-204000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/enable-default-cni-204000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/enable-default-cni-204000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:c1:f6:d6:16:7b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/enable-default-cni-204000/disk.qcow2
	I1011 15:11:13.345402    6062 main.go:141] libmachine: STDOUT: 
	I1011 15:11:13.345422    6062 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:11:13.345444    6062 client.go:171] duration metric: took 236.311167ms to LocalClient.Create
	I1011 15:11:15.347626    6062 start.go:128] duration metric: took 2.257365292s to createHost
	I1011 15:11:15.347729    6062 start.go:83] releasing machines lock for "enable-default-cni-204000", held for 2.257539166s
	W1011 15:11:15.347786    6062 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:11:15.361944    6062 out.go:177] * Deleting "enable-default-cni-204000" in qemu2 ...
	W1011 15:11:15.386048    6062 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:11:15.386082    6062 start.go:729] Will try again in 5 seconds ...
	I1011 15:11:20.388208    6062 start.go:360] acquireMachinesLock for enable-default-cni-204000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:11:20.388755    6062 start.go:364] duration metric: took 457.375µs to acquireMachinesLock for "enable-default-cni-204000"
	I1011 15:11:20.388876    6062 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-204000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-204000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 15:11:20.389124    6062 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 15:11:20.397032    6062 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1011 15:11:20.440452    6062 start.go:159] libmachine.API.Create for "enable-default-cni-204000" (driver="qemu2")
	I1011 15:11:20.440516    6062 client.go:168] LocalClient.Create starting
	I1011 15:11:20.440654    6062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 15:11:20.440725    6062 main.go:141] libmachine: Decoding PEM data...
	I1011 15:11:20.440741    6062 main.go:141] libmachine: Parsing certificate...
	I1011 15:11:20.440804    6062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 15:11:20.440854    6062 main.go:141] libmachine: Decoding PEM data...
	I1011 15:11:20.440867    6062 main.go:141] libmachine: Parsing certificate...
	I1011 15:11:20.441531    6062 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 15:11:20.612867    6062 main.go:141] libmachine: Creating SSH key...
	I1011 15:11:20.783563    6062 main.go:141] libmachine: Creating Disk image...
	I1011 15:11:20.783572    6062 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 15:11:20.783844    6062 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/enable-default-cni-204000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/enable-default-cni-204000/disk.qcow2
	I1011 15:11:20.794151    6062 main.go:141] libmachine: STDOUT: 
	I1011 15:11:20.794167    6062 main.go:141] libmachine: STDERR: 
	I1011 15:11:20.794226    6062 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/enable-default-cni-204000/disk.qcow2 +20000M
	I1011 15:11:20.802694    6062 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 15:11:20.802710    6062 main.go:141] libmachine: STDERR: 
	I1011 15:11:20.802721    6062 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/enable-default-cni-204000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/enable-default-cni-204000/disk.qcow2
	I1011 15:11:20.802731    6062 main.go:141] libmachine: Starting QEMU VM...
	I1011 15:11:20.802742    6062 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:11:20.802775    6062 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/enable-default-cni-204000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/enable-default-cni-204000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/enable-default-cni-204000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:d7:04:3c:75:cd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/enable-default-cni-204000/disk.qcow2
	I1011 15:11:20.804641    6062 main.go:141] libmachine: STDOUT: 
	I1011 15:11:20.804658    6062 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:11:20.804671    6062 client.go:171] duration metric: took 364.154958ms to LocalClient.Create
	I1011 15:11:22.806864    6062 start.go:128] duration metric: took 2.417740416s to createHost
	I1011 15:11:22.806977    6062 start.go:83] releasing machines lock for "enable-default-cni-204000", held for 2.418235917s
	W1011 15:11:22.807377    6062 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-204000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-204000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:11:22.819033    6062 out.go:201] 
	W1011 15:11:22.823180    6062 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 15:11:22.823207    6062 out.go:270] * 
	* 
	W1011 15:11:22.825838    6062 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 15:11:22.836004    6062 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.89s)
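Note the E-level line in the stderr above, "Found deprecated --enable-default-cni flag, setting --cni=bridge": this profile ends up with CNI:bridge in its cluster config. The non-deprecated equivalent of the command under test would look like the line below; this is a sketch derived from the invocation in this log, not a command actually run in this report.

	out/minikube-darwin-arm64 start -p enable-default-cni-204000 --memory=3072 \
	  --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2

Apart from that flag translation, the failure is the same socket_vmnet connection refusal as in the preceding tests.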

                                                
                                    
TestNetworkPlugins/group/bridge/Start (9.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-204000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
E1011 15:11:29.277087    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/addons-392000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-204000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.9058725s)

                                                
                                                
-- stdout --
	* [bridge-204000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-204000" primary control-plane node in "bridge-204000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-204000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 15:11:25.221777    6175 out.go:345] Setting OutFile to fd 1 ...
	I1011 15:11:25.221915    6175 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:11:25.221918    6175 out.go:358] Setting ErrFile to fd 2...
	I1011 15:11:25.221920    6175 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:11:25.222065    6175 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 15:11:25.223205    6175 out.go:352] Setting JSON to false
	I1011 15:11:25.241197    6175 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6055,"bootTime":1728678630,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 15:11:25.241278    6175 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 15:11:25.246176    6175 out.go:177] * [bridge-204000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 15:11:25.254045    6175 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 15:11:25.254061    6175 notify.go:220] Checking for updates...
	I1011 15:11:25.260113    6175 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 15:11:25.263134    6175 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 15:11:25.266179    6175 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 15:11:25.269159    6175 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 15:11:25.272107    6175 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 15:11:25.275568    6175 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 15:11:25.275635    6175 config.go:182] Loaded profile config "stopped-upgrade-583000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1011 15:11:25.275683    6175 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 15:11:25.280163    6175 out.go:177] * Using the qemu2 driver based on user configuration
	I1011 15:11:25.287132    6175 start.go:297] selected driver: qemu2
	I1011 15:11:25.287137    6175 start.go:901] validating driver "qemu2" against <nil>
	I1011 15:11:25.287144    6175 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 15:11:25.289644    6175 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 15:11:25.294204    6175 out.go:177] * Automatically selected the socket_vmnet network
	I1011 15:11:25.297164    6175 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 15:11:25.297180    6175 cni.go:84] Creating CNI manager for "bridge"
	I1011 15:11:25.297187    6175 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1011 15:11:25.297213    6175 start.go:340] cluster config:
	{Name:bridge-204000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-204000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 15:11:25.301755    6175 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:11:25.309933    6175 out.go:177] * Starting "bridge-204000" primary control-plane node in "bridge-204000" cluster
	I1011 15:11:25.314146    6175 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1011 15:11:25.314165    6175 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1011 15:11:25.314175    6175 cache.go:56] Caching tarball of preloaded images
	I1011 15:11:25.314252    6175 preload.go:172] Found /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1011 15:11:25.314258    6175 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1011 15:11:25.314311    6175 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/bridge-204000/config.json ...
	I1011 15:11:25.314324    6175 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/bridge-204000/config.json: {Name:mk0ac9b59d5556cd902f5a097636dfe5a4edd462 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 15:11:25.314554    6175 start.go:360] acquireMachinesLock for bridge-204000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:11:25.314600    6175 start.go:364] duration metric: took 40.875µs to acquireMachinesLock for "bridge-204000"
	I1011 15:11:25.314612    6175 start.go:93] Provisioning new machine with config: &{Name:bridge-204000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:bridge-204000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 15:11:25.314634    6175 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 15:11:25.322089    6175 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1011 15:11:25.336768    6175 start.go:159] libmachine.API.Create for "bridge-204000" (driver="qemu2")
	I1011 15:11:25.336805    6175 client.go:168] LocalClient.Create starting
	I1011 15:11:25.336872    6175 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 15:11:25.336915    6175 main.go:141] libmachine: Decoding PEM data...
	I1011 15:11:25.336925    6175 main.go:141] libmachine: Parsing certificate...
	I1011 15:11:25.336961    6175 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 15:11:25.336990    6175 main.go:141] libmachine: Decoding PEM data...
	I1011 15:11:25.337000    6175 main.go:141] libmachine: Parsing certificate...
	I1011 15:11:25.337368    6175 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 15:11:25.491942    6175 main.go:141] libmachine: Creating SSH key...
	I1011 15:11:25.536442    6175 main.go:141] libmachine: Creating Disk image...
	I1011 15:11:25.536453    6175 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 15:11:25.536669    6175 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/bridge-204000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/bridge-204000/disk.qcow2
	I1011 15:11:25.546811    6175 main.go:141] libmachine: STDOUT: 
	I1011 15:11:25.546837    6175 main.go:141] libmachine: STDERR: 
	I1011 15:11:25.546894    6175 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/bridge-204000/disk.qcow2 +20000M
	I1011 15:11:25.555641    6175 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 15:11:25.555659    6175 main.go:141] libmachine: STDERR: 
	I1011 15:11:25.555683    6175 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/bridge-204000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/bridge-204000/disk.qcow2
	I1011 15:11:25.555689    6175 main.go:141] libmachine: Starting QEMU VM...
	I1011 15:11:25.555702    6175 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:11:25.555736    6175 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/bridge-204000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/bridge-204000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/bridge-204000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:55:2d:71:2c:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/bridge-204000/disk.qcow2
	I1011 15:11:25.557638    6175 main.go:141] libmachine: STDOUT: 
	I1011 15:11:25.557654    6175 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:11:25.557675    6175 client.go:171] duration metric: took 220.868125ms to LocalClient.Create
	I1011 15:11:27.559765    6175 start.go:128] duration metric: took 2.245153958s to createHost
	I1011 15:11:27.559828    6175 start.go:83] releasing machines lock for "bridge-204000", held for 2.245257625s
	W1011 15:11:27.559858    6175 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:11:27.566889    6175 out.go:177] * Deleting "bridge-204000" in qemu2 ...
	W1011 15:11:27.579438    6175 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:11:27.579455    6175 start.go:729] Will try again in 5 seconds ...
	I1011 15:11:32.581822    6175 start.go:360] acquireMachinesLock for bridge-204000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:11:32.582608    6175 start.go:364] duration metric: took 619.417µs to acquireMachinesLock for "bridge-204000"
	I1011 15:11:32.582718    6175 start.go:93] Provisioning new machine with config: &{Name:bridge-204000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:bridge-204000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 15:11:32.583025    6175 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 15:11:32.593679    6175 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1011 15:11:32.640165    6175 start.go:159] libmachine.API.Create for "bridge-204000" (driver="qemu2")
	I1011 15:11:32.640226    6175 client.go:168] LocalClient.Create starting
	I1011 15:11:32.640373    6175 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 15:11:32.640450    6175 main.go:141] libmachine: Decoding PEM data...
	I1011 15:11:32.640466    6175 main.go:141] libmachine: Parsing certificate...
	I1011 15:11:32.640527    6175 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 15:11:32.640601    6175 main.go:141] libmachine: Decoding PEM data...
	I1011 15:11:32.640612    6175 main.go:141] libmachine: Parsing certificate...
	I1011 15:11:32.641165    6175 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 15:11:32.806768    6175 main.go:141] libmachine: Creating SSH key...
	I1011 15:11:33.028662    6175 main.go:141] libmachine: Creating Disk image...
	I1011 15:11:33.028672    6175 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 15:11:33.028933    6175 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/bridge-204000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/bridge-204000/disk.qcow2
	I1011 15:11:33.039404    6175 main.go:141] libmachine: STDOUT: 
	I1011 15:11:33.039427    6175 main.go:141] libmachine: STDERR: 
	I1011 15:11:33.039492    6175 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/bridge-204000/disk.qcow2 +20000M
	I1011 15:11:33.048241    6175 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 15:11:33.048256    6175 main.go:141] libmachine: STDERR: 
	I1011 15:11:33.048272    6175 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/bridge-204000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/bridge-204000/disk.qcow2
	I1011 15:11:33.048278    6175 main.go:141] libmachine: Starting QEMU VM...
	I1011 15:11:33.048288    6175 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:11:33.048321    6175 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/bridge-204000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/bridge-204000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/bridge-204000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:31:e0:13:ec:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/bridge-204000/disk.qcow2
	I1011 15:11:33.050187    6175 main.go:141] libmachine: STDOUT: 
	I1011 15:11:33.050204    6175 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:11:33.050216    6175 client.go:171] duration metric: took 409.989625ms to LocalClient.Create
	I1011 15:11:35.052402    6175 start.go:128] duration metric: took 2.469370125s to createHost
	I1011 15:11:35.052523    6175 start.go:83] releasing machines lock for "bridge-204000", held for 2.469890042s
	W1011 15:11:35.052898    6175 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-204000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-204000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:11:35.062517    6175 out.go:201] 
	W1011 15:11:35.068676    6175 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 15:11:35.068718    6175 out.go:270] * 
	* 
	W1011 15:11:35.071255    6175 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 15:11:35.080565    6175 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.91s)
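
This failure, like the other qemu2 Start failures in this group, reduces to the root cause visible in the stderr above: the driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the /var/run/socket_vmnet socket ("Connection refused"), so host creation aborts and the test exits with status 80. A minimal Go probe of that socket path, sketched below, would confirm whether the socket_vmnet daemon is actually running on the agent; the path and failure mode are taken from the log, but the probe itself is only an illustration and is not part of minikube or the test suite.

	// socket_probe.go - standalone sketch: dial the socket_vmnet control socket
	// and report whether anything is listening. A "connection refused" here
	// reproduces the error the qemu2 driver logs above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const socketPath = "/var/run/socket_vmnet" // path used by the failing tests
		conn, err := net.DialTimeout("unix", socketPath, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", socketPath, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("socket_vmnet is listening at %s\n", socketPath)
	}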

TestNetworkPlugins/group/kubenet/Start (10.13s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-204000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-204000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (10.130816334s)

-- stdout --
	* [kubenet-204000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-204000" primary control-plane node in "kubenet-204000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-204000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1011 15:11:37.583640    6289 out.go:345] Setting OutFile to fd 1 ...
	I1011 15:11:37.583802    6289 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:11:37.583806    6289 out.go:358] Setting ErrFile to fd 2...
	I1011 15:11:37.583808    6289 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:11:37.583939    6289 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 15:11:37.585296    6289 out.go:352] Setting JSON to false
	I1011 15:11:37.606082    6289 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6067,"bootTime":1728678630,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 15:11:37.606168    6289 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 15:11:37.610319    6289 out.go:177] * [kubenet-204000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 15:11:37.617284    6289 notify.go:220] Checking for updates...
	I1011 15:11:37.620130    6289 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 15:11:37.628096    6289 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 15:11:37.635240    6289 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 15:11:37.643217    6289 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 15:11:37.651138    6289 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 15:11:37.659228    6289 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 15:11:37.663606    6289 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 15:11:37.663690    6289 config.go:182] Loaded profile config "stopped-upgrade-583000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1011 15:11:37.663739    6289 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 15:11:37.675232    6289 out.go:177] * Using the qemu2 driver based on user configuration
	I1011 15:11:37.682115    6289 start.go:297] selected driver: qemu2
	I1011 15:11:37.682119    6289 start.go:901] validating driver "qemu2" against <nil>
	I1011 15:11:37.682124    6289 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 15:11:37.684779    6289 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 15:11:37.689223    6289 out.go:177] * Automatically selected the socket_vmnet network
	I1011 15:11:37.692305    6289 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 15:11:37.692325    6289 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1011 15:11:37.692366    6289 start.go:340] cluster config:
	{Name:kubenet-204000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-204000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 15:11:37.697285    6289 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:11:37.705244    6289 out.go:177] * Starting "kubenet-204000" primary control-plane node in "kubenet-204000" cluster
	I1011 15:11:37.713053    6289 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1011 15:11:37.713077    6289 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1011 15:11:37.713088    6289 cache.go:56] Caching tarball of preloaded images
	I1011 15:11:37.713183    6289 preload.go:172] Found /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1011 15:11:37.713189    6289 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1011 15:11:37.713254    6289 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/kubenet-204000/config.json ...
	I1011 15:11:37.713265    6289 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/kubenet-204000/config.json: {Name:mkc0c6d5c055ad00d9266d5f36244f7f146b8e7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 15:11:37.713554    6289 start.go:360] acquireMachinesLock for kubenet-204000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:11:37.713597    6289 start.go:364] duration metric: took 38.375µs to acquireMachinesLock for "kubenet-204000"
	I1011 15:11:37.713609    6289 start.go:93] Provisioning new machine with config: &{Name:kubenet-204000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:kubenet-204000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 15:11:37.713648    6289 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 15:11:37.722245    6289 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1011 15:11:37.737626    6289 start.go:159] libmachine.API.Create for "kubenet-204000" (driver="qemu2")
	I1011 15:11:37.737656    6289 client.go:168] LocalClient.Create starting
	I1011 15:11:37.737722    6289 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 15:11:37.737761    6289 main.go:141] libmachine: Decoding PEM data...
	I1011 15:11:37.737771    6289 main.go:141] libmachine: Parsing certificate...
	I1011 15:11:37.737812    6289 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 15:11:37.737841    6289 main.go:141] libmachine: Decoding PEM data...
	I1011 15:11:37.737851    6289 main.go:141] libmachine: Parsing certificate...
	I1011 15:11:37.739828    6289 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 15:11:38.059431    6289 main.go:141] libmachine: Creating SSH key...
	I1011 15:11:38.178855    6289 main.go:141] libmachine: Creating Disk image...
	I1011 15:11:38.178863    6289 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 15:11:38.179111    6289 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubenet-204000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubenet-204000/disk.qcow2
	I1011 15:11:38.189404    6289 main.go:141] libmachine: STDOUT: 
	I1011 15:11:38.189429    6289 main.go:141] libmachine: STDERR: 
	I1011 15:11:38.189485    6289 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubenet-204000/disk.qcow2 +20000M
	I1011 15:11:38.198276    6289 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 15:11:38.198292    6289 main.go:141] libmachine: STDERR: 
	I1011 15:11:38.198308    6289 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubenet-204000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubenet-204000/disk.qcow2
	I1011 15:11:38.198312    6289 main.go:141] libmachine: Starting QEMU VM...
	I1011 15:11:38.198324    6289 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:11:38.198354    6289 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubenet-204000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubenet-204000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubenet-204000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:be:02:3c:bc:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubenet-204000/disk.qcow2
	I1011 15:11:38.200255    6289 main.go:141] libmachine: STDOUT: 
	I1011 15:11:38.200270    6289 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:11:38.200291    6289 client.go:171] duration metric: took 462.63625ms to LocalClient.Create
	I1011 15:11:40.202529    6289 start.go:128] duration metric: took 2.488886375s to createHost
	I1011 15:11:40.202613    6289 start.go:83] releasing machines lock for "kubenet-204000", held for 2.489044792s
	W1011 15:11:40.202681    6289 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:11:40.214953    6289 out.go:177] * Deleting "kubenet-204000" in qemu2 ...
	W1011 15:11:40.241344    6289 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:11:40.241379    6289 start.go:729] Will try again in 5 seconds ...
	I1011 15:11:45.243538    6289 start.go:360] acquireMachinesLock for kubenet-204000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:11:45.244167    6289 start.go:364] duration metric: took 506.041µs to acquireMachinesLock for "kubenet-204000"
	I1011 15:11:45.244313    6289 start.go:93] Provisioning new machine with config: &{Name:kubenet-204000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:kubenet-204000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 15:11:45.244700    6289 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 15:11:45.250404    6289 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1011 15:11:45.293908    6289 start.go:159] libmachine.API.Create for "kubenet-204000" (driver="qemu2")
	I1011 15:11:45.293973    6289 client.go:168] LocalClient.Create starting
	I1011 15:11:45.294131    6289 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 15:11:45.294223    6289 main.go:141] libmachine: Decoding PEM data...
	I1011 15:11:45.294242    6289 main.go:141] libmachine: Parsing certificate...
	I1011 15:11:45.294309    6289 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 15:11:45.294366    6289 main.go:141] libmachine: Decoding PEM data...
	I1011 15:11:45.294379    6289 main.go:141] libmachine: Parsing certificate...
	I1011 15:11:45.295178    6289 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 15:11:45.456744    6289 main.go:141] libmachine: Creating SSH key...
	I1011 15:11:45.627699    6289 main.go:141] libmachine: Creating Disk image...
	I1011 15:11:45.627712    6289 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 15:11:45.627981    6289 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubenet-204000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubenet-204000/disk.qcow2
	I1011 15:11:45.638353    6289 main.go:141] libmachine: STDOUT: 
	I1011 15:11:45.638377    6289 main.go:141] libmachine: STDERR: 
	I1011 15:11:45.638454    6289 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubenet-204000/disk.qcow2 +20000M
	I1011 15:11:45.646942    6289 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 15:11:45.646962    6289 main.go:141] libmachine: STDERR: 
	I1011 15:11:45.646974    6289 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubenet-204000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubenet-204000/disk.qcow2
	I1011 15:11:45.646983    6289 main.go:141] libmachine: Starting QEMU VM...
	I1011 15:11:45.646991    6289 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:11:45.647026    6289 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubenet-204000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubenet-204000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubenet-204000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:59:f9:73:26:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/kubenet-204000/disk.qcow2
	I1011 15:11:45.648912    6289 main.go:141] libmachine: STDOUT: 
	I1011 15:11:45.648932    6289 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:11:45.648946    6289 client.go:171] duration metric: took 354.97375ms to LocalClient.Create
	I1011 15:11:47.651020    6289 start.go:128] duration metric: took 2.406335792s to createHost
	I1011 15:11:47.651052    6289 start.go:83] releasing machines lock for "kubenet-204000", held for 2.40690225s
	W1011 15:11:47.651171    6289 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-204000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-204000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:11:47.659423    6289 out.go:201] 
	W1011 15:11:47.664362    6289 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 15:11:47.664368    6289 out.go:270] * 
	* 
	W1011 15:11:47.664877    6289 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 15:11:47.674404    6289 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (10.13s)
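
The kubenet failure follows the same two-attempt flow recorded for the bridge test: LocalClient.Create fails, the half-created profile is deleted, start.go waits ("Will try again in 5 seconds ...") and retries once, then surfaces the GUEST_PROVISION error. The sketch below is a simplified illustration of that observed retry behavior, not minikube's actual start.go implementation.

	// retry_sketch.go - illustration of the create / delete / retry-once flow
	// visible in the logs; createHost is a stand-in for qemu2 host creation.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func createHost() error {
		// Both attempts in the failing tests hit this same error.
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
			if err := createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
				return
			}
		}
		fmt.Println("host created")
	}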

TestStartStop/group/old-k8s-version/serial/FirstStart (9.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-627000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-627000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.856650458s)

-- stdout --
	* [old-k8s-version-627000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-627000" primary control-plane node in "old-k8s-version-627000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-627000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1011 15:11:50.004993    6400 out.go:345] Setting OutFile to fd 1 ...
	I1011 15:11:50.005143    6400 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:11:50.005146    6400 out.go:358] Setting ErrFile to fd 2...
	I1011 15:11:50.005149    6400 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:11:50.005289    6400 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 15:11:50.006474    6400 out.go:352] Setting JSON to false
	I1011 15:11:50.024408    6400 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6080,"bootTime":1728678630,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 15:11:50.024475    6400 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 15:11:50.029714    6400 out.go:177] * [old-k8s-version-627000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 15:11:50.037609    6400 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 15:11:50.037641    6400 notify.go:220] Checking for updates...
	I1011 15:11:50.042538    6400 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 15:11:50.045592    6400 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 15:11:50.048538    6400 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 15:11:50.051552    6400 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 15:11:50.054583    6400 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 15:11:50.056207    6400 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 15:11:50.056296    6400 config.go:182] Loaded profile config "stopped-upgrade-583000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1011 15:11:50.056351    6400 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 15:11:50.060524    6400 out.go:177] * Using the qemu2 driver based on user configuration
	I1011 15:11:50.067385    6400 start.go:297] selected driver: qemu2
	I1011 15:11:50.067392    6400 start.go:901] validating driver "qemu2" against <nil>
	I1011 15:11:50.067400    6400 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 15:11:50.069803    6400 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 15:11:50.073533    6400 out.go:177] * Automatically selected the socket_vmnet network
	I1011 15:11:50.076647    6400 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 15:11:50.076671    6400 cni.go:84] Creating CNI manager for ""
	I1011 15:11:50.076692    6400 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1011 15:11:50.076712    6400 start.go:340] cluster config:
	{Name:old-k8s-version-627000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-627000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/
socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 15:11:50.081294    6400 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:11:50.089623    6400 out.go:177] * Starting "old-k8s-version-627000" primary control-plane node in "old-k8s-version-627000" cluster
	I1011 15:11:50.093488    6400 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1011 15:11:50.093514    6400 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1011 15:11:50.093520    6400 cache.go:56] Caching tarball of preloaded images
	I1011 15:11:50.093594    6400 preload.go:172] Found /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1011 15:11:50.093599    6400 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1011 15:11:50.093660    6400 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/old-k8s-version-627000/config.json ...
	I1011 15:11:50.093671    6400 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/old-k8s-version-627000/config.json: {Name:mka42d7ec13498823f505bd57671a0005d9fc5bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 15:11:50.093907    6400 start.go:360] acquireMachinesLock for old-k8s-version-627000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:11:50.093952    6400 start.go:364] duration metric: took 39.042µs to acquireMachinesLock for "old-k8s-version-627000"
	I1011 15:11:50.093965    6400 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-627000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-627000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 15:11:50.094000    6400 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 15:11:50.101559    6400 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1011 15:11:50.117901    6400 start.go:159] libmachine.API.Create for "old-k8s-version-627000" (driver="qemu2")
	I1011 15:11:50.117928    6400 client.go:168] LocalClient.Create starting
	I1011 15:11:50.118002    6400 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 15:11:50.118038    6400 main.go:141] libmachine: Decoding PEM data...
	I1011 15:11:50.118049    6400 main.go:141] libmachine: Parsing certificate...
	I1011 15:11:50.118081    6400 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 15:11:50.118110    6400 main.go:141] libmachine: Decoding PEM data...
	I1011 15:11:50.118116    6400 main.go:141] libmachine: Parsing certificate...
	I1011 15:11:50.118581    6400 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 15:11:50.274664    6400 main.go:141] libmachine: Creating SSH key...
	I1011 15:11:50.407999    6400 main.go:141] libmachine: Creating Disk image...
	I1011 15:11:50.408006    6400 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 15:11:50.408232    6400 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/old-k8s-version-627000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/old-k8s-version-627000/disk.qcow2
	I1011 15:11:50.418620    6400 main.go:141] libmachine: STDOUT: 
	I1011 15:11:50.418637    6400 main.go:141] libmachine: STDERR: 
	I1011 15:11:50.418702    6400 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/old-k8s-version-627000/disk.qcow2 +20000M
	I1011 15:11:50.427209    6400 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 15:11:50.427231    6400 main.go:141] libmachine: STDERR: 
	I1011 15:11:50.427248    6400 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/old-k8s-version-627000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/old-k8s-version-627000/disk.qcow2
	I1011 15:11:50.427253    6400 main.go:141] libmachine: Starting QEMU VM...
	I1011 15:11:50.427264    6400 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:11:50.427298    6400 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/old-k8s-version-627000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/old-k8s-version-627000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/old-k8s-version-627000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:3f:d1:1f:d4:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/old-k8s-version-627000/disk.qcow2
	I1011 15:11:50.429156    6400 main.go:141] libmachine: STDOUT: 
	I1011 15:11:50.429171    6400 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:11:50.429199    6400 client.go:171] duration metric: took 311.270333ms to LocalClient.Create
	I1011 15:11:52.431384    6400 start.go:128] duration metric: took 2.337386958s to createHost
	I1011 15:11:52.431472    6400 start.go:83] releasing machines lock for "old-k8s-version-627000", held for 2.337547625s
	W1011 15:11:52.431518    6400 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:11:52.444774    6400 out.go:177] * Deleting "old-k8s-version-627000" in qemu2 ...
	W1011 15:11:52.471884    6400 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:11:52.471920    6400 start.go:729] Will try again in 5 seconds ...
	I1011 15:11:57.474096    6400 start.go:360] acquireMachinesLock for old-k8s-version-627000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:11:57.474712    6400 start.go:364] duration metric: took 513µs to acquireMachinesLock for "old-k8s-version-627000"
	I1011 15:11:57.474858    6400 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-627000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-627000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 15:11:57.475078    6400 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 15:11:57.483936    6400 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1011 15:11:57.529488    6400 start.go:159] libmachine.API.Create for "old-k8s-version-627000" (driver="qemu2")
	I1011 15:11:57.529542    6400 client.go:168] LocalClient.Create starting
	I1011 15:11:57.529706    6400 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 15:11:57.529784    6400 main.go:141] libmachine: Decoding PEM data...
	I1011 15:11:57.529800    6400 main.go:141] libmachine: Parsing certificate...
	I1011 15:11:57.529876    6400 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 15:11:57.529938    6400 main.go:141] libmachine: Decoding PEM data...
	I1011 15:11:57.529950    6400 main.go:141] libmachine: Parsing certificate...
	I1011 15:11:57.530534    6400 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 15:11:57.696562    6400 main.go:141] libmachine: Creating SSH key...
	I1011 15:11:57.766549    6400 main.go:141] libmachine: Creating Disk image...
	I1011 15:11:57.766557    6400 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 15:11:57.766801    6400 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/old-k8s-version-627000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/old-k8s-version-627000/disk.qcow2
	I1011 15:11:57.777054    6400 main.go:141] libmachine: STDOUT: 
	I1011 15:11:57.777070    6400 main.go:141] libmachine: STDERR: 
	I1011 15:11:57.777124    6400 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/old-k8s-version-627000/disk.qcow2 +20000M
	I1011 15:11:57.785582    6400 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 15:11:57.785607    6400 main.go:141] libmachine: STDERR: 
	I1011 15:11:57.785619    6400 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/old-k8s-version-627000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/old-k8s-version-627000/disk.qcow2
	I1011 15:11:57.785623    6400 main.go:141] libmachine: Starting QEMU VM...
	I1011 15:11:57.785634    6400 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:11:57.785665    6400 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/old-k8s-version-627000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/old-k8s-version-627000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/old-k8s-version-627000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:93:1a:4f:43:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/old-k8s-version-627000/disk.qcow2
	I1011 15:11:57.787511    6400 main.go:141] libmachine: STDOUT: 
	I1011 15:11:57.787526    6400 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:11:57.787539    6400 client.go:171] duration metric: took 257.993875ms to LocalClient.Create
	I1011 15:11:59.789720    6400 start.go:128] duration metric: took 2.314590042s to createHost
	I1011 15:11:59.789844    6400 start.go:83] releasing machines lock for "old-k8s-version-627000", held for 2.3151285s
	W1011 15:11:59.790283    6400 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-627000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-627000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:11:59.799915    6400 out.go:201] 
	W1011 15:11:59.804963    6400 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 15:11:59.804984    6400 out.go:270] * 
	* 
	W1011 15:11:59.806935    6400 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 15:11:59.815765    6400 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-627000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-627000 -n old-k8s-version-627000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-627000 -n old-k8s-version-627000: exit status 7 (65.01975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-627000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.92s)
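
The post-mortem block that follows each old-k8s-version failure runs "minikube status --format={{.Host}}" against the profile and treats exit status 7 with output "Stopped" as expected for a host that never came up, so guest log retrieval is skipped. The snippet below is a rough illustration of that check; the command line, exit code, and output are taken from the log, but this is not the helpers_test.go source.

	// postmortem_sketch.go - rerun the status check the helper performs and
	// interpret its result the same way: a non-running host means no guest logs.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		profile := "old-k8s-version-627000"
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", profile, "-n", profile)
		out, err := cmd.Output() // stdout is captured even when the command exits non-zero
		state := strings.TrimSpace(string(out))
		if exitErr, ok := err.(*exec.ExitError); ok {
			fmt.Printf("status error: exit status %d (may be ok)\n", exitErr.ExitCode())
		}
		if state != "Running" {
			fmt.Printf("%q host is not running, skipping log retrieval (state=%q)\n", profile, state)
		}
	}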

TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-627000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-627000 create -f testdata/busybox.yaml: exit status 1 (29.086ms)

** stderr ** 
	error: context "old-k8s-version-627000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-627000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-627000 -n old-k8s-version-627000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-627000 -n old-k8s-version-627000: exit status 7 (34.878167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-627000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-627000 -n old-k8s-version-627000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-627000 -n old-k8s-version-627000: exit status 7 (34.074959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-627000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-627000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-627000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-627000 describe deploy/metrics-server -n kube-system: exit status 1 (27.502584ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-627000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-627000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-627000 -n old-k8s-version-627000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-627000 -n old-k8s-version-627000: exit status 7 (33.566333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-627000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-627000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-627000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.1980045s)

                                                
                                                
-- stdout --
	* [old-k8s-version-627000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-627000" primary control-plane node in "old-k8s-version-627000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-627000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-627000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 15:12:04.330213    6459 out.go:345] Setting OutFile to fd 1 ...
	I1011 15:12:04.330372    6459 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:12:04.330375    6459 out.go:358] Setting ErrFile to fd 2...
	I1011 15:12:04.330378    6459 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:12:04.330505    6459 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 15:12:04.331641    6459 out.go:352] Setting JSON to false
	I1011 15:12:04.349833    6459 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6094,"bootTime":1728678630,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 15:12:04.349907    6459 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 15:12:04.355183    6459 out.go:177] * [old-k8s-version-627000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 15:12:04.362197    6459 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 15:12:04.362252    6459 notify.go:220] Checking for updates...
	I1011 15:12:04.370106    6459 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 15:12:04.371341    6459 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 15:12:04.374110    6459 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 15:12:04.377125    6459 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 15:12:04.380129    6459 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 15:12:04.383453    6459 config.go:182] Loaded profile config "old-k8s-version-627000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1011 15:12:04.387066    6459 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1011 15:12:04.390089    6459 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 15:12:04.394062    6459 out.go:177] * Using the qemu2 driver based on existing profile
	I1011 15:12:04.401153    6459 start.go:297] selected driver: qemu2
	I1011 15:12:04.401159    6459 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-627000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-627000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 15:12:04.401204    6459 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 15:12:04.403692    6459 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 15:12:04.403717    6459 cni.go:84] Creating CNI manager for ""
	I1011 15:12:04.403744    6459 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1011 15:12:04.403766    6459 start.go:340] cluster config:
	{Name:old-k8s-version-627000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-627000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 15:12:04.407988    6459 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:12:04.416079    6459 out.go:177] * Starting "old-k8s-version-627000" primary control-plane node in "old-k8s-version-627000" cluster
	I1011 15:12:04.420112    6459 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1011 15:12:04.420125    6459 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1011 15:12:04.420133    6459 cache.go:56] Caching tarball of preloaded images
	I1011 15:12:04.420209    6459 preload.go:172] Found /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1011 15:12:04.420213    6459 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1011 15:12:04.420261    6459 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/old-k8s-version-627000/config.json ...
	I1011 15:12:04.420566    6459 start.go:360] acquireMachinesLock for old-k8s-version-627000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:12:04.420609    6459 start.go:364] duration metric: took 37.708µs to acquireMachinesLock for "old-k8s-version-627000"
	I1011 15:12:04.420619    6459 start.go:96] Skipping create...Using existing machine configuration
	I1011 15:12:04.420622    6459 fix.go:54] fixHost starting: 
	I1011 15:12:04.420726    6459 fix.go:112] recreateIfNeeded on old-k8s-version-627000: state=Stopped err=<nil>
	W1011 15:12:04.420734    6459 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 15:12:04.424114    6459 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-627000" ...
	I1011 15:12:04.432115    6459 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:12:04.432149    6459 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/old-k8s-version-627000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/old-k8s-version-627000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/old-k8s-version-627000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:93:1a:4f:43:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/old-k8s-version-627000/disk.qcow2
	I1011 15:12:04.434237    6459 main.go:141] libmachine: STDOUT: 
	I1011 15:12:04.434256    6459 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:12:04.434283    6459 fix.go:56] duration metric: took 13.659833ms for fixHost
	I1011 15:12:04.434286    6459 start.go:83] releasing machines lock for "old-k8s-version-627000", held for 13.672833ms
	W1011 15:12:04.434292    6459 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 15:12:04.434321    6459 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:12:04.434325    6459 start.go:729] Will try again in 5 seconds ...
	I1011 15:12:09.436435    6459 start.go:360] acquireMachinesLock for old-k8s-version-627000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:12:09.437050    6459 start.go:364] duration metric: took 522.458µs to acquireMachinesLock for "old-k8s-version-627000"
	I1011 15:12:09.437194    6459 start.go:96] Skipping create...Using existing machine configuration
	I1011 15:12:09.437217    6459 fix.go:54] fixHost starting: 
	I1011 15:12:09.437977    6459 fix.go:112] recreateIfNeeded on old-k8s-version-627000: state=Stopped err=<nil>
	W1011 15:12:09.438003    6459 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 15:12:09.442842    6459 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-627000" ...
	I1011 15:12:09.451605    6459 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:12:09.451870    6459 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/old-k8s-version-627000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/old-k8s-version-627000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/old-k8s-version-627000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:93:1a:4f:43:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/old-k8s-version-627000/disk.qcow2
	I1011 15:12:09.463085    6459 main.go:141] libmachine: STDOUT: 
	I1011 15:12:09.463145    6459 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:12:09.463258    6459 fix.go:56] duration metric: took 26.042584ms for fixHost
	I1011 15:12:09.463277    6459 start.go:83] releasing machines lock for "old-k8s-version-627000", held for 26.201416ms
	W1011 15:12:09.463461    6459 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-627000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-627000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:12:09.469587    6459 out.go:201] 
	W1011 15:12:09.473613    6459 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 15:12:09.473646    6459 out.go:270] * 
	* 
	W1011 15:12:09.475672    6459 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 15:12:09.483547    6459 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-627000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
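The second start fails at exactly the same point as the first, and the output above already carries minikube's own recovery suggestion; spelled out for this profile it is the delete-and-retry below, though with socket_vmnet still unreachable the retry would presumably end the same way:
	out/minikube-darwin-arm64 delete -p old-k8s-version-627000
	out/minikube-darwin-arm64 start -p old-k8s-version-627000 --memory=2200 --driver=qemu2 --kubernetes-version=v1.20.0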
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-627000 -n old-k8s-version-627000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-627000 -n old-k8s-version-627000: exit status 7 (66.057625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-627000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-627000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-627000 -n old-k8s-version-627000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-627000 -n old-k8s-version-627000: exit status 7 (35.068167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-627000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-627000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-627000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-627000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.850916ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-627000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-627000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-627000 -n old-k8s-version-627000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-627000 -n old-k8s-version-627000: exit status 7 (34.148708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-627000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-627000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
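All eight images sit on the `-want` side of the diff because no VM ever booted, so nothing for v1.20.0 was loaded into the profile. The empty `+got` side can be re-checked by hand with the same `image list` call the test ran; listing the local cache (the path follows the MINIKUBE_HOME recorded earlier and is illustrative only) shows what has merely been downloaded to the host, which is separate from what is loaded in the guest:
	# What the profile reports (same command the test ran):
	out/minikube-darwin-arm64 -p old-k8s-version-627000 image list --format=json
	# What has been downloaded into the host-side image cache:
	ls /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/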
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-627000 -n old-k8s-version-627000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-627000 -n old-k8s-version-627000: exit status 7 (33.311083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-627000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-627000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-627000 --alsologtostderr -v=1: exit status 83 (42.779667ms)

                                                
                                                
-- stdout --
	* The control-plane node old-k8s-version-627000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-627000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 15:12:09.772511    6479 out.go:345] Setting OutFile to fd 1 ...
	I1011 15:12:09.773462    6479 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:12:09.773466    6479 out.go:358] Setting ErrFile to fd 2...
	I1011 15:12:09.773469    6479 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:12:09.773617    6479 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 15:12:09.773836    6479 out.go:352] Setting JSON to false
	I1011 15:12:09.773847    6479 mustload.go:65] Loading cluster: old-k8s-version-627000
	I1011 15:12:09.774073    6479 config.go:182] Loaded profile config "old-k8s-version-627000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1011 15:12:09.775802    6479 out.go:177] * The control-plane node old-k8s-version-627000 host is not running: state=Stopped
	I1011 15:12:09.778850    6479 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-627000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-627000 --alsologtostderr -v=1 failed: exit status 83
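Exit status 83 here matches the `host is not running: state=Stopped` message in the stdout above: pause has nothing to act on because the profile never came up. A one-shot overview of every profile's state (a standard minikube subcommand, not used by this test) is:
	out/minikube-darwin-arm64 profile list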
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-627000 -n old-k8s-version-627000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-627000 -n old-k8s-version-627000: exit status 7 (33.648291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-627000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-627000 -n old-k8s-version-627000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-627000 -n old-k8s-version-627000: exit status 7 (33.637125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-627000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

TestStartStop/group/no-preload/serial/FirstStart (10.05s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-785000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-785000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.980109458s)

                                                
                                                
-- stdout --
	* [no-preload-785000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-785000" primary control-plane node in "no-preload-785000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-785000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 15:12:10.062423    6495 out.go:345] Setting OutFile to fd 1 ...
	I1011 15:12:10.062581    6495 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:12:10.062584    6495 out.go:358] Setting ErrFile to fd 2...
	I1011 15:12:10.062587    6495 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:12:10.062708    6495 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 15:12:10.064061    6495 out.go:352] Setting JSON to false
	I1011 15:12:10.083272    6495 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6100,"bootTime":1728678630,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 15:12:10.083341    6495 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 15:12:10.087577    6495 out.go:177] * [no-preload-785000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 15:12:10.095646    6495 notify.go:220] Checking for updates...
	I1011 15:12:10.099545    6495 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 15:12:10.106539    6495 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 15:12:10.113546    6495 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 15:12:10.121355    6495 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 15:12:10.129597    6495 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 15:12:10.135553    6495 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 15:12:10.139966    6495 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 15:12:10.140010    6495 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 15:12:10.146604    6495 out.go:177] * Using the qemu2 driver based on user configuration
	I1011 15:12:10.152564    6495 start.go:297] selected driver: qemu2
	I1011 15:12:10.152573    6495 start.go:901] validating driver "qemu2" against <nil>
	I1011 15:12:10.152581    6495 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 15:12:10.155145    6495 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 15:12:10.158556    6495 out.go:177] * Automatically selected the socket_vmnet network
	I1011 15:12:10.161534    6495 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 15:12:10.161557    6495 cni.go:84] Creating CNI manager for ""
	I1011 15:12:10.161584    6495 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1011 15:12:10.161588    6495 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1011 15:12:10.161615    6495 start.go:340] cluster config:
	{Name:no-preload-785000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-785000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket
_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 15:12:10.167070    6495 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:12:10.171561    6495 out.go:177] * Starting "no-preload-785000" primary control-plane node in "no-preload-785000" cluster
	I1011 15:12:10.178613    6495 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1011 15:12:10.178771    6495 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/no-preload-785000/config.json ...
	I1011 15:12:10.178802    6495 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/no-preload-785000/config.json: {Name:mk510e1d2c2d1c5af42ad1c06f8a0381dfb4ab09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 15:12:10.178786    6495 cache.go:107] acquiring lock: {Name:mk4458181073552f380e5d174c79ce54460686fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:12:10.178791    6495 cache.go:107] acquiring lock: {Name:mkb592ae3bbf5e8c6ecbc57d7a56ee51871442e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:12:10.178838    6495 cache.go:107] acquiring lock: {Name:mk51bebd2b4a75ab89bbf996a053190441197923 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:12:10.178892    6495 cache.go:107] acquiring lock: {Name:mk6569f98d3e7dafb30718c578c22b35ae0cb709 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:12:10.178957    6495 cache.go:107] acquiring lock: {Name:mk0a6874db207bf2f2aebea816c951bfdeb51e1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:12:10.178980    6495 cache.go:107] acquiring lock: {Name:mkf8dbef86a326416a84e4cd8bb104e2e99ed36d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:12:10.178980    6495 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 15:12:10.178976    6495 cache.go:107] acquiring lock: {Name:mkcfda0f70c995601854d2514526ca8bd9c40153 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:12:10.179010    6495 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1011 15:12:10.179097    6495 start.go:360] acquireMachinesLock for no-preload-785000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:12:10.179208    6495 cache.go:107] acquiring lock: {Name:mk0c038c97f0c07d7696feb3835e56e44a255946 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:12:10.179372    6495 start.go:364] duration metric: took 269µs to acquireMachinesLock for "no-preload-785000"
	I1011 15:12:10.179379    6495 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1011 15:12:10.179383    6495 start.go:93] Provisioning new machine with config: &{Name:no-preload-785000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:no-preload-785000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 15:12:10.179425    6495 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 15:12:10.179495    6495 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1011 15:12:10.179501    6495 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1011 15:12:10.179505    6495 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1011 15:12:10.179560    6495 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1011 15:12:10.179530    6495 cache.go:115] /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1011 15:12:10.179582    6495 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 803.833µs
	I1011 15:12:10.179589    6495 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1011 15:12:10.183575    6495 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1011 15:12:10.191305    6495 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1011 15:12:10.191337    6495 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1011 15:12:10.191385    6495 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1011 15:12:10.191449    6495 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1011 15:12:10.191860    6495 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1011 15:12:10.192206    6495 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1011 15:12:10.193633    6495 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1011 15:12:10.199766    6495 start.go:159] libmachine.API.Create for "no-preload-785000" (driver="qemu2")
	I1011 15:12:10.199791    6495 client.go:168] LocalClient.Create starting
	I1011 15:12:10.199864    6495 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 15:12:10.199901    6495 main.go:141] libmachine: Decoding PEM data...
	I1011 15:12:10.199915    6495 main.go:141] libmachine: Parsing certificate...
	I1011 15:12:10.199953    6495 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 15:12:10.199982    6495 main.go:141] libmachine: Decoding PEM data...
	I1011 15:12:10.199989    6495 main.go:141] libmachine: Parsing certificate...
	I1011 15:12:10.200365    6495 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 15:12:10.495643    6495 main.go:141] libmachine: Creating SSH key...
	I1011 15:12:10.579951    6495 main.go:141] libmachine: Creating Disk image...
	I1011 15:12:10.579969    6495 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 15:12:10.580184    6495 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/no-preload-785000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/no-preload-785000/disk.qcow2
	I1011 15:12:10.590253    6495 main.go:141] libmachine: STDOUT: 
	I1011 15:12:10.590267    6495 main.go:141] libmachine: STDERR: 
	I1011 15:12:10.590331    6495 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/no-preload-785000/disk.qcow2 +20000M
	I1011 15:12:10.600301    6495 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 15:12:10.600321    6495 main.go:141] libmachine: STDERR: 
	I1011 15:12:10.600340    6495 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/no-preload-785000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/no-preload-785000/disk.qcow2
	I1011 15:12:10.600345    6495 main.go:141] libmachine: Starting QEMU VM...
	I1011 15:12:10.600358    6495 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:12:10.600383    6495 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/no-preload-785000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/no-preload-785000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/no-preload-785000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:87:b8:58:e5:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/no-preload-785000/disk.qcow2
	I1011 15:12:10.602540    6495 main.go:141] libmachine: STDOUT: 
	I1011 15:12:10.602553    6495 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:12:10.602574    6495 client.go:171] duration metric: took 402.78375ms to LocalClient.Create
	I1011 15:12:10.616659    6495 cache.go:162] opening:  /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1
	I1011 15:12:10.629542    6495 cache.go:162] opening:  /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I1011 15:12:10.682801    6495 cache.go:162] opening:  /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1
	I1011 15:12:10.737831    6495 cache.go:162] opening:  /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I1011 15:12:10.815279    6495 cache.go:162] opening:  /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1
	I1011 15:12:10.886833    6495 cache.go:157] /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1011 15:12:10.886858    6495 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 707.978083ms
	I1011 15:12:10.886875    6495 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1011 15:12:10.890012    6495 cache.go:162] opening:  /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1011 15:12:10.918071    6495 cache.go:162] opening:  /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I1011 15:12:12.602794    6495 start.go:128] duration metric: took 2.423381792s to createHost
	I1011 15:12:12.602855    6495 start.go:83] releasing machines lock for "no-preload-785000", held for 2.423511917s
	W1011 15:12:12.602916    6495 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:12:12.627478    6495 out.go:177] * Deleting "no-preload-785000" in qemu2 ...
	W1011 15:12:12.650853    6495 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:12:12.650879    6495 start.go:729] Will try again in 5 seconds ...
	I1011 15:12:13.820789    6495 cache.go:157] /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1011 15:12:13.820863    6495 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 3.642000584s
	I1011 15:12:13.820893    6495 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1011 15:12:14.049715    6495 cache.go:157] /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I1011 15:12:14.049770    6495 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 3.871045458s
	I1011 15:12:14.049794    6495 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I1011 15:12:14.253969    6495 cache.go:157] /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I1011 15:12:14.254071    6495 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 4.075274292s
	I1011 15:12:14.254108    6495 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I1011 15:12:14.398693    6495 cache.go:157] /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I1011 15:12:14.398738    6495 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 4.21971075s
	I1011 15:12:14.398761    6495 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I1011 15:12:15.699384    6495 cache.go:157] /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I1011 15:12:15.699443    6495 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 5.520544542s
	I1011 15:12:15.699473    6495 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I1011 15:12:17.651049    6495 start.go:360] acquireMachinesLock for no-preload-785000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:12:17.651670    6495 start.go:364] duration metric: took 529.459µs to acquireMachinesLock for "no-preload-785000"
	I1011 15:12:17.651816    6495 start.go:93] Provisioning new machine with config: &{Name:no-preload-785000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-785000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 15:12:17.652052    6495 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 15:12:17.662588    6495 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1011 15:12:17.712176    6495 start.go:159] libmachine.API.Create for "no-preload-785000" (driver="qemu2")
	I1011 15:12:17.712226    6495 client.go:168] LocalClient.Create starting
	I1011 15:12:17.712356    6495 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 15:12:17.712434    6495 main.go:141] libmachine: Decoding PEM data...
	I1011 15:12:17.712457    6495 main.go:141] libmachine: Parsing certificate...
	I1011 15:12:17.712536    6495 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 15:12:17.712596    6495 main.go:141] libmachine: Decoding PEM data...
	I1011 15:12:17.712613    6495 main.go:141] libmachine: Parsing certificate...
	I1011 15:12:17.713167    6495 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 15:12:17.879958    6495 main.go:141] libmachine: Creating SSH key...
	I1011 15:12:17.942815    6495 main.go:141] libmachine: Creating Disk image...
	I1011 15:12:17.942821    6495 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 15:12:17.943035    6495 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/no-preload-785000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/no-preload-785000/disk.qcow2
	I1011 15:12:17.953019    6495 main.go:141] libmachine: STDOUT: 
	I1011 15:12:17.953033    6495 main.go:141] libmachine: STDERR: 
	I1011 15:12:17.953091    6495 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/no-preload-785000/disk.qcow2 +20000M
	I1011 15:12:17.961669    6495 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 15:12:17.961685    6495 main.go:141] libmachine: STDERR: 
	I1011 15:12:17.961695    6495 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/no-preload-785000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/no-preload-785000/disk.qcow2
	I1011 15:12:17.961702    6495 main.go:141] libmachine: Starting QEMU VM...
	I1011 15:12:17.961713    6495 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:12:17.961755    6495 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/no-preload-785000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/no-preload-785000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/no-preload-785000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:ad:e4:65:39:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/no-preload-785000/disk.qcow2
	I1011 15:12:17.963692    6495 main.go:141] libmachine: STDOUT: 
	I1011 15:12:17.963706    6495 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:12:17.963718    6495 client.go:171] duration metric: took 251.490125ms to LocalClient.Create
	I1011 15:12:19.893040    6495 cache.go:157] /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I1011 15:12:19.893114    6495 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 9.71442675s
	I1011 15:12:19.893165    6495 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I1011 15:12:19.893213    6495 cache.go:87] Successfully saved all images to host disk.
	I1011 15:12:19.964084    6495 start.go:128] duration metric: took 2.312046917s to createHost
	I1011 15:12:19.964127    6495 start.go:83] releasing machines lock for "no-preload-785000", held for 2.312470709s
	W1011 15:12:19.964482    6495 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-785000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-785000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:12:19.977926    6495 out.go:201] 
	W1011 15:12:19.981083    6495 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 15:12:19.981107    6495 out.go:270] * 
	* 
	W1011 15:12:19.984293    6495 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 15:12:19.997013    6495 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-785000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-785000 -n no-preload-785000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-785000 -n no-preload-785000: exit status 7 (65.481709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-785000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.05s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-616000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-616000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (12.307848667s)

                                                
                                                
-- stdout --
	* [embed-certs-616000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-616000" primary control-plane node in "embed-certs-616000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-616000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 15:12:10.175660    6503 out.go:345] Setting OutFile to fd 1 ...
	I1011 15:12:10.178645    6503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:12:10.178649    6503 out.go:358] Setting ErrFile to fd 2...
	I1011 15:12:10.178651    6503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:12:10.178835    6503 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 15:12:10.184007    6503 out.go:352] Setting JSON to false
	I1011 15:12:10.205048    6503 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6100,"bootTime":1728678630,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 15:12:10.205141    6503 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 15:12:10.209599    6503 out.go:177] * [embed-certs-616000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 15:12:10.214646    6503 notify.go:220] Checking for updates...
	I1011 15:12:10.218573    6503 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 15:12:10.224514    6503 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 15:12:10.240548    6503 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 15:12:10.248448    6503 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 15:12:10.255529    6503 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 15:12:10.263539    6503 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 15:12:10.268036    6503 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 15:12:10.268106    6503 config.go:182] Loaded profile config "no-preload-785000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 15:12:10.268160    6503 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 15:12:10.274623    6503 out.go:177] * Using the qemu2 driver based on user configuration
	I1011 15:12:10.282556    6503 start.go:297] selected driver: qemu2
	I1011 15:12:10.282565    6503 start.go:901] validating driver "qemu2" against <nil>
	I1011 15:12:10.282577    6503 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 15:12:10.285216    6503 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 15:12:10.290679    6503 out.go:177] * Automatically selected the socket_vmnet network
	I1011 15:12:10.294619    6503 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 15:12:10.294641    6503 cni.go:84] Creating CNI manager for ""
	I1011 15:12:10.294676    6503 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1011 15:12:10.294682    6503 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1011 15:12:10.294722    6503 start.go:340] cluster config:
	{Name:embed-certs-616000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 15:12:10.299517    6503 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:12:10.347658    6503 out.go:177] * Starting "embed-certs-616000" primary control-plane node in "embed-certs-616000" cluster
	I1011 15:12:10.351625    6503 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1011 15:12:10.351661    6503 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1011 15:12:10.351674    6503 cache.go:56] Caching tarball of preloaded images
	I1011 15:12:10.351768    6503 preload.go:172] Found /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1011 15:12:10.351774    6503 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1011 15:12:10.351843    6503 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/embed-certs-616000/config.json ...
	I1011 15:12:10.351855    6503 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/embed-certs-616000/config.json: {Name:mkae93e3c8663d2492ff92ad11820ed75350c248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 15:12:10.352119    6503 start.go:360] acquireMachinesLock for embed-certs-616000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:12:12.603040    6503 start.go:364] duration metric: took 2.250893333s to acquireMachinesLock for "embed-certs-616000"
	I1011 15:12:12.603152    6503 start.go:93] Provisioning new machine with config: &{Name:embed-certs-616000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 15:12:12.603339    6503 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 15:12:12.617013    6503 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1011 15:12:12.667526    6503 start.go:159] libmachine.API.Create for "embed-certs-616000" (driver="qemu2")
	I1011 15:12:12.667578    6503 client.go:168] LocalClient.Create starting
	I1011 15:12:12.667722    6503 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 15:12:12.667810    6503 main.go:141] libmachine: Decoding PEM data...
	I1011 15:12:12.667833    6503 main.go:141] libmachine: Parsing certificate...
	I1011 15:12:12.667904    6503 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 15:12:12.667962    6503 main.go:141] libmachine: Decoding PEM data...
	I1011 15:12:12.667978    6503 main.go:141] libmachine: Parsing certificate...
	I1011 15:12:12.668592    6503 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 15:12:12.838253    6503 main.go:141] libmachine: Creating SSH key...
	I1011 15:12:12.909168    6503 main.go:141] libmachine: Creating Disk image...
	I1011 15:12:12.909176    6503 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 15:12:12.909447    6503 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/embed-certs-616000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/embed-certs-616000/disk.qcow2
	I1011 15:12:12.919643    6503 main.go:141] libmachine: STDOUT: 
	I1011 15:12:12.919662    6503 main.go:141] libmachine: STDERR: 
	I1011 15:12:12.919733    6503 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/embed-certs-616000/disk.qcow2 +20000M
	I1011 15:12:12.928613    6503 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 15:12:12.928644    6503 main.go:141] libmachine: STDERR: 
	I1011 15:12:12.928663    6503 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/embed-certs-616000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/embed-certs-616000/disk.qcow2
	I1011 15:12:12.928669    6503 main.go:141] libmachine: Starting QEMU VM...
	I1011 15:12:12.928680    6503 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:12:12.928712    6503 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/embed-certs-616000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/embed-certs-616000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/embed-certs-616000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:c9:68:3f:05:b1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/embed-certs-616000/disk.qcow2
	I1011 15:12:12.930638    6503 main.go:141] libmachine: STDOUT: 
	I1011 15:12:12.930652    6503 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:12:12.930671    6503 client.go:171] duration metric: took 263.09ms to LocalClient.Create
	I1011 15:12:14.932855    6503 start.go:128] duration metric: took 2.329484458s to createHost
	I1011 15:12:14.932923    6503 start.go:83] releasing machines lock for "embed-certs-616000", held for 2.329883292s
	W1011 15:12:14.932988    6503 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:12:14.945365    6503 out.go:177] * Deleting "embed-certs-616000" in qemu2 ...
	W1011 15:12:14.980605    6503 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:12:14.980636    6503 start.go:729] Will try again in 5 seconds ...
	I1011 15:12:19.981403    6503 start.go:360] acquireMachinesLock for embed-certs-616000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:12:19.981778    6503 start.go:364] duration metric: took 306.791µs to acquireMachinesLock for "embed-certs-616000"
	I1011 15:12:19.981910    6503 start.go:93] Provisioning new machine with config: &{Name:embed-certs-616000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 15:12:19.982171    6503 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 15:12:19.993998    6503 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1011 15:12:20.043489    6503 start.go:159] libmachine.API.Create for "embed-certs-616000" (driver="qemu2")
	I1011 15:12:20.043543    6503 client.go:168] LocalClient.Create starting
	I1011 15:12:20.043626    6503 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 15:12:20.043689    6503 main.go:141] libmachine: Decoding PEM data...
	I1011 15:12:20.043705    6503 main.go:141] libmachine: Parsing certificate...
	I1011 15:12:20.043772    6503 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 15:12:20.043803    6503 main.go:141] libmachine: Decoding PEM data...
	I1011 15:12:20.043818    6503 main.go:141] libmachine: Parsing certificate...
	I1011 15:12:20.044354    6503 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 15:12:20.221204    6503 main.go:141] libmachine: Creating SSH key...
	I1011 15:12:20.388158    6503 main.go:141] libmachine: Creating Disk image...
	I1011 15:12:20.388165    6503 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 15:12:20.388342    6503 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/embed-certs-616000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/embed-certs-616000/disk.qcow2
	I1011 15:12:20.397967    6503 main.go:141] libmachine: STDOUT: 
	I1011 15:12:20.397997    6503 main.go:141] libmachine: STDERR: 
	I1011 15:12:20.398056    6503 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/embed-certs-616000/disk.qcow2 +20000M
	I1011 15:12:20.406868    6503 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 15:12:20.406889    6503 main.go:141] libmachine: STDERR: 
	I1011 15:12:20.406903    6503 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/embed-certs-616000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/embed-certs-616000/disk.qcow2
	I1011 15:12:20.406909    6503 main.go:141] libmachine: Starting QEMU VM...
	I1011 15:12:20.406920    6503 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:12:20.406945    6503 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/embed-certs-616000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/embed-certs-616000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/embed-certs-616000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:75:f3:ad:f4:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/embed-certs-616000/disk.qcow2
	I1011 15:12:20.408956    6503 main.go:141] libmachine: STDOUT: 
	I1011 15:12:20.408971    6503 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:12:20.408983    6503 client.go:171] duration metric: took 365.441458ms to LocalClient.Create
	I1011 15:12:22.411141    6503 start.go:128] duration metric: took 2.428971541s to createHost
	I1011 15:12:22.411201    6503 start.go:83] releasing machines lock for "embed-certs-616000", held for 2.429440375s
	W1011 15:12:22.411625    6503 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-616000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-616000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:12:22.420296    6503 out.go:201] 
	W1011 15:12:22.424400    6503 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 15:12:22.424425    6503 out.go:270] * 
	* 
	W1011 15:12:22.426997    6503 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 15:12:22.435463    6503 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-616000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000: exit status 7 (71.343791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-616000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (12.38s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-785000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-785000 create -f testdata/busybox.yaml: exit status 1 (32.442584ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-785000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-785000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-785000 -n no-preload-785000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-785000 -n no-preload-785000: exit status 7 (39.295291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-785000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-785000 -n no-preload-785000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-785000 -n no-preload-785000: exit status 7 (38.822334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-785000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-785000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-785000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-785000 describe deploy/metrics-server -n kube-system: exit status 1 (28.232958ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-785000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-785000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-785000 -n no-preload-785000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-785000 -n no-preload-785000: exit status 7 (34.662291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-785000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-616000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-616000 create -f testdata/busybox.yaml: exit status 1 (28.781ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-616000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-616000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000: exit status 7 (33.053167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-616000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000: exit status 7 (32.62575ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-616000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-616000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-616000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-616000 describe deploy/metrics-server -n kube-system: exit status 1 (27.277458ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-616000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-616000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000: exit status 7 (33.373709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-616000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (5.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-785000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-785000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.191408541s)

                                                
                                                
-- stdout --
	* [no-preload-785000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-785000" primary control-plane node in "no-preload-785000" cluster
	* Restarting existing qemu2 VM for "no-preload-785000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-785000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 15:12:23.749622    6609 out.go:345] Setting OutFile to fd 1 ...
	I1011 15:12:23.749782    6609 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:12:23.749786    6609 out.go:358] Setting ErrFile to fd 2...
	I1011 15:12:23.749788    6609 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:12:23.749918    6609 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 15:12:23.750917    6609 out.go:352] Setting JSON to false
	I1011 15:12:23.768464    6609 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6113,"bootTime":1728678630,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 15:12:23.768534    6609 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 15:12:23.773426    6609 out.go:177] * [no-preload-785000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 15:12:23.780634    6609 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 15:12:23.780678    6609 notify.go:220] Checking for updates...
	I1011 15:12:23.788540    6609 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 15:12:23.791600    6609 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 15:12:23.794571    6609 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 15:12:23.797600    6609 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 15:12:23.800585    6609 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 15:12:23.803848    6609 config.go:182] Loaded profile config "no-preload-785000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 15:12:23.804110    6609 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 15:12:23.808528    6609 out.go:177] * Using the qemu2 driver based on existing profile
	I1011 15:12:23.814474    6609 start.go:297] selected driver: qemu2
	I1011 15:12:23.814480    6609 start.go:901] validating driver "qemu2" against &{Name:no-preload-785000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-785000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 15:12:23.814536    6609 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 15:12:23.817074    6609 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 15:12:23.817097    6609 cni.go:84] Creating CNI manager for ""
	I1011 15:12:23.817122    6609 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1011 15:12:23.817145    6609 start.go:340] cluster config:
	{Name:no-preload-785000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-785000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 15:12:23.821583    6609 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:12:23.829595    6609 out.go:177] * Starting "no-preload-785000" primary control-plane node in "no-preload-785000" cluster
	I1011 15:12:23.833587    6609 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1011 15:12:23.833656    6609 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/no-preload-785000/config.json ...
	I1011 15:12:23.833692    6609 cache.go:107] acquiring lock: {Name:mk0c038c97f0c07d7696feb3835e56e44a255946 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:12:23.833692    6609 cache.go:107] acquiring lock: {Name:mk6569f98d3e7dafb30718c578c22b35ae0cb709 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:12:23.833714    6609 cache.go:107] acquiring lock: {Name:mk4458181073552f380e5d174c79ce54460686fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:12:23.833724    6609 cache.go:107] acquiring lock: {Name:mkcfda0f70c995601854d2514526ca8bd9c40153 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:12:23.833815    6609 cache.go:115] /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1011 15:12:23.833811    6609 cache.go:115] /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I1011 15:12:23.833821    6609 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 108.5µs
	I1011 15:12:23.833823    6609 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 145.833µs
	I1011 15:12:23.833828    6609 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1011 15:12:23.833828    6609 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I1011 15:12:23.833819    6609 cache.go:115] /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1011 15:12:23.833824    6609 cache.go:107] acquiring lock: {Name:mkf8dbef86a326416a84e4cd8bb104e2e99ed36d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:12:23.833835    6609 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 111.208µs
	I1011 15:12:23.833838    6609 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1011 15:12:23.833835    6609 cache.go:107] acquiring lock: {Name:mkb592ae3bbf5e8c6ecbc57d7a56ee51871442e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:12:23.833906    6609 cache.go:115] /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I1011 15:12:23.833914    6609 cache.go:115] /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I1011 15:12:23.833915    6609 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 91.875µs
	I1011 15:12:23.833918    6609 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 84µs
	I1011 15:12:23.833909    6609 cache.go:115] /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I1011 15:12:23.833909    6609 cache.go:107] acquiring lock: {Name:mk51bebd2b4a75ab89bbf996a053190441197923 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:12:23.833915    6609 cache.go:107] acquiring lock: {Name:mk0a6874db207bf2f2aebea816c951bfdeb51e1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:12:23.833920    6609 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I1011 15:12:23.833922    6609 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I1011 15:12:23.833924    6609 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 241.458µs
	I1011 15:12:23.833978    6609 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I1011 15:12:23.834022    6609 cache.go:115] /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1011 15:12:23.834028    6609 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 153.334µs
	I1011 15:12:23.834035    6609 cache.go:115] /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I1011 15:12:23.834036    6609 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1011 15:12:23.834042    6609 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 188.209µs
	I1011 15:12:23.834048    6609 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I1011 15:12:23.834051    6609 cache.go:87] Successfully saved all images to host disk.
	I1011 15:12:23.834100    6609 start.go:360] acquireMachinesLock for no-preload-785000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:12:23.834135    6609 start.go:364] duration metric: took 27.708µs to acquireMachinesLock for "no-preload-785000"
	I1011 15:12:23.834145    6609 start.go:96] Skipping create...Using existing machine configuration
	I1011 15:12:23.834149    6609 fix.go:54] fixHost starting: 
	I1011 15:12:23.834271    6609 fix.go:112] recreateIfNeeded on no-preload-785000: state=Stopped err=<nil>
	W1011 15:12:23.834279    6609 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 15:12:23.841504    6609 out.go:177] * Restarting existing qemu2 VM for "no-preload-785000" ...
	I1011 15:12:23.845585    6609 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:12:23.845645    6609 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/no-preload-785000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/no-preload-785000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/no-preload-785000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:ad:e4:65:39:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/no-preload-785000/disk.qcow2
	I1011 15:12:23.847820    6609 main.go:141] libmachine: STDOUT: 
	I1011 15:12:23.847840    6609 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:12:23.847867    6609 fix.go:56] duration metric: took 13.716292ms for fixHost
	I1011 15:12:23.847872    6609 start.go:83] releasing machines lock for "no-preload-785000", held for 13.733417ms
	W1011 15:12:23.847878    6609 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 15:12:23.847910    6609 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:12:23.847914    6609 start.go:729] Will try again in 5 seconds ...
	I1011 15:12:28.850008    6609 start.go:360] acquireMachinesLock for no-preload-785000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:12:28.850358    6609 start.go:364] duration metric: took 273.625µs to acquireMachinesLock for "no-preload-785000"
	I1011 15:12:28.850479    6609 start.go:96] Skipping create...Using existing machine configuration
	I1011 15:12:28.850499    6609 fix.go:54] fixHost starting: 
	I1011 15:12:28.851159    6609 fix.go:112] recreateIfNeeded on no-preload-785000: state=Stopped err=<nil>
	W1011 15:12:28.851183    6609 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 15:12:28.859633    6609 out.go:177] * Restarting existing qemu2 VM for "no-preload-785000" ...
	I1011 15:12:28.863591    6609 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:12:28.863809    6609 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/no-preload-785000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/no-preload-785000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/no-preload-785000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:ad:e4:65:39:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/no-preload-785000/disk.qcow2
	I1011 15:12:28.873527    6609 main.go:141] libmachine: STDOUT: 
	I1011 15:12:28.873588    6609 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:12:28.873693    6609 fix.go:56] duration metric: took 23.194541ms for fixHost
	I1011 15:12:28.873710    6609 start.go:83] releasing machines lock for "no-preload-785000", held for 23.331917ms
	W1011 15:12:28.873902    6609 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-785000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-785000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:12:28.880656    6609 out.go:201] 
	W1011 15:12:28.884680    6609 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 15:12:28.884718    6609 out.go:270] * 
	* 
	W1011 15:12:28.887742    6609 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 15:12:28.895651    6609 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-785000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-785000 -n no-preload-785000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-785000 -n no-preload-785000: exit status 7 (72.507125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-785000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.27s)
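Every failed restart in this group appears to stop at the same point: the qemu2 driver launches socket_vmnet_client, but nothing is accepting connections on /var/run/socket_vmnet, so the VM never gets its network fd. A minimal probe such as the following (a hypothetical sketch, not part of the minikube test suite; the socket path is taken from the log above) could be run on the CI host before the suite to confirm whether the socket_vmnet daemon is up:

	// socketprobe.go — hypothetical helper, not part of the test suite.
	// Dials the unix socket that the failing qemu2 starts above depend on.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// Mirrors the "Failed to connect ... Connection refused" seen in the log.
			fmt.Fprintln(os.Stderr, "socket_vmnet unreachable:", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}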

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (5.45s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-616000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-616000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.378345292s)

                                                
                                                
-- stdout --
	* [embed-certs-616000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-616000" primary control-plane node in "embed-certs-616000" cluster
	* Restarting existing qemu2 VM for "embed-certs-616000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-616000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 15:12:26.741795    6630 out.go:345] Setting OutFile to fd 1 ...
	I1011 15:12:26.741963    6630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:12:26.741966    6630 out.go:358] Setting ErrFile to fd 2...
	I1011 15:12:26.741969    6630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:12:26.742098    6630 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 15:12:26.743167    6630 out.go:352] Setting JSON to false
	I1011 15:12:26.760542    6630 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6116,"bootTime":1728678630,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 15:12:26.760615    6630 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 15:12:26.766246    6630 out.go:177] * [embed-certs-616000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 15:12:26.772084    6630 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 15:12:26.772125    6630 notify.go:220] Checking for updates...
	I1011 15:12:26.780109    6630 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 15:12:26.784060    6630 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 15:12:26.787107    6630 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 15:12:26.790074    6630 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 15:12:26.793005    6630 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 15:12:26.796429    6630 config.go:182] Loaded profile config "embed-certs-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 15:12:26.796692    6630 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 15:12:26.800061    6630 out.go:177] * Using the qemu2 driver based on existing profile
	I1011 15:12:26.807017    6630 start.go:297] selected driver: qemu2
	I1011 15:12:26.807023    6630 start.go:901] validating driver "qemu2" against &{Name:embed-certs-616000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:embed-certs-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 15:12:26.807072    6630 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 15:12:26.809622    6630 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 15:12:26.809647    6630 cni.go:84] Creating CNI manager for ""
	I1011 15:12:26.809668    6630 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1011 15:12:26.809695    6630 start.go:340] cluster config:
	{Name:embed-certs-616000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-616000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 15:12:26.814078    6630 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:12:26.820987    6630 out.go:177] * Starting "embed-certs-616000" primary control-plane node in "embed-certs-616000" cluster
	I1011 15:12:26.825076    6630 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1011 15:12:26.825094    6630 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1011 15:12:26.825105    6630 cache.go:56] Caching tarball of preloaded images
	I1011 15:12:26.825207    6630 preload.go:172] Found /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1011 15:12:26.825213    6630 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1011 15:12:26.825278    6630 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/embed-certs-616000/config.json ...
	I1011 15:12:26.825734    6630 start.go:360] acquireMachinesLock for embed-certs-616000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:12:26.825769    6630 start.go:364] duration metric: took 27.542µs to acquireMachinesLock for "embed-certs-616000"
	I1011 15:12:26.825780    6630 start.go:96] Skipping create...Using existing machine configuration
	I1011 15:12:26.825785    6630 fix.go:54] fixHost starting: 
	I1011 15:12:26.825916    6630 fix.go:112] recreateIfNeeded on embed-certs-616000: state=Stopped err=<nil>
	W1011 15:12:26.825924    6630 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 15:12:26.829100    6630 out.go:177] * Restarting existing qemu2 VM for "embed-certs-616000" ...
	I1011 15:12:26.836065    6630 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:12:26.836114    6630 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/embed-certs-616000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/embed-certs-616000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/embed-certs-616000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:75:f3:ad:f4:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/embed-certs-616000/disk.qcow2
	I1011 15:12:26.838425    6630 main.go:141] libmachine: STDOUT: 
	I1011 15:12:26.838438    6630 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:12:26.838467    6630 fix.go:56] duration metric: took 12.679959ms for fixHost
	I1011 15:12:26.838472    6630 start.go:83] releasing machines lock for "embed-certs-616000", held for 12.69875ms
	W1011 15:12:26.838478    6630 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 15:12:26.838524    6630 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:12:26.838528    6630 start.go:729] Will try again in 5 seconds ...
	I1011 15:12:31.840646    6630 start.go:360] acquireMachinesLock for embed-certs-616000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:12:32.009681    6630 start.go:364] duration metric: took 168.924708ms to acquireMachinesLock for "embed-certs-616000"
	I1011 15:12:32.009767    6630 start.go:96] Skipping create...Using existing machine configuration
	I1011 15:12:32.009787    6630 fix.go:54] fixHost starting: 
	I1011 15:12:32.010575    6630 fix.go:112] recreateIfNeeded on embed-certs-616000: state=Stopped err=<nil>
	W1011 15:12:32.010601    6630 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 15:12:32.019642    6630 out.go:177] * Restarting existing qemu2 VM for "embed-certs-616000" ...
	I1011 15:12:32.034683    6630 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:12:32.034926    6630 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/embed-certs-616000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/embed-certs-616000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/embed-certs-616000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:75:f3:ad:f4:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/embed-certs-616000/disk.qcow2
	I1011 15:12:32.046388    6630 main.go:141] libmachine: STDOUT: 
	I1011 15:12:32.046434    6630 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:12:32.046510    6630 fix.go:56] duration metric: took 36.729333ms for fixHost
	I1011 15:12:32.046534    6630 start.go:83] releasing machines lock for "embed-certs-616000", held for 36.803375ms
	W1011 15:12:32.046722    6630 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-616000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-616000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:12:32.055668    6630 out.go:201] 
	W1011 15:12:32.059879    6630 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 15:12:32.059903    6630 out.go:270] * 
	* 
	W1011 15:12:32.062014    6630 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 15:12:32.073711    6630 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-616000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000: exit status 7 (66.164459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-616000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.45s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-785000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-785000 -n no-preload-785000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-785000 -n no-preload-785000: exit status 7 (34.896167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-785000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-785000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-785000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-785000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.9825ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-785000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-785000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-785000 -n no-preload-785000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-785000 -n no-preload-785000: exit status 7 (33.137125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-785000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-785000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
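The "-want +got" block above reads like a go-cmp style diff of the expected v1.31.1 image list against what "minikube image list" returned; with the host stopped, the got side is empty, so every expected image is reported missing. A rough sketch of the same comparison (hypothetical, using github.com/google/go-cmp/cmp rather than the test's own helpers, and listing only two of the expected images for brevity):

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/kube-apiserver:v1.31.1",
			// remaining v1.31.1 images as listed in the diff above
		}
		var got []string // host is Stopped, so `image list` returns nothing
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.31.1 images missing (-want +got):\n%s", diff)
		}
	}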
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-785000 -n no-preload-785000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-785000 -n no-preload-785000: exit status 7 (32.384708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-785000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-785000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-785000 --alsologtostderr -v=1: exit status 83 (43.4025ms)

                                                
                                                
-- stdout --
	* The control-plane node no-preload-785000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-785000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 15:12:29.187169    6649 out.go:345] Setting OutFile to fd 1 ...
	I1011 15:12:29.187365    6649 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:12:29.187372    6649 out.go:358] Setting ErrFile to fd 2...
	I1011 15:12:29.187374    6649 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:12:29.187496    6649 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 15:12:29.187716    6649 out.go:352] Setting JSON to false
	I1011 15:12:29.187724    6649 mustload.go:65] Loading cluster: no-preload-785000
	I1011 15:12:29.187963    6649 config.go:182] Loaded profile config "no-preload-785000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 15:12:29.192508    6649 out.go:177] * The control-plane node no-preload-785000 host is not running: state=Stopped
	I1011 15:12:29.195473    6649 out.go:177]   To start a cluster, run: "minikube start -p no-preload-785000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-785000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-785000 -n no-preload-785000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-785000 -n no-preload-785000: exit status 7 (33.333417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-785000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-785000 -n no-preload-785000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-785000 -n no-preload-785000: exit status 7 (32.247125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-785000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.91s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-270000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-270000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.832106834s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-270000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-270000" primary control-plane node in "default-k8s-diff-port-270000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-270000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 15:12:29.629985    6673 out.go:345] Setting OutFile to fd 1 ...
	I1011 15:12:29.630167    6673 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:12:29.630170    6673 out.go:358] Setting ErrFile to fd 2...
	I1011 15:12:29.630177    6673 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:12:29.630289    6673 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 15:12:29.631480    6673 out.go:352] Setting JSON to false
	I1011 15:12:29.649041    6673 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6119,"bootTime":1728678630,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 15:12:29.649140    6673 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 15:12:29.654481    6673 out.go:177] * [default-k8s-diff-port-270000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 15:12:29.661408    6673 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 15:12:29.661476    6673 notify.go:220] Checking for updates...
	I1011 15:12:29.668359    6673 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 15:12:29.672423    6673 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 15:12:29.675416    6673 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 15:12:29.678394    6673 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 15:12:29.681385    6673 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 15:12:29.684724    6673 config.go:182] Loaded profile config "embed-certs-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 15:12:29.684790    6673 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 15:12:29.684836    6673 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 15:12:29.688313    6673 out.go:177] * Using the qemu2 driver based on user configuration
	I1011 15:12:29.695452    6673 start.go:297] selected driver: qemu2
	I1011 15:12:29.695459    6673 start.go:901] validating driver "qemu2" against <nil>
	I1011 15:12:29.695467    6673 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 15:12:29.698070    6673 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 15:12:29.702293    6673 out.go:177] * Automatically selected the socket_vmnet network
	I1011 15:12:29.705506    6673 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 15:12:29.705524    6673 cni.go:84] Creating CNI manager for ""
	I1011 15:12:29.705548    6673 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1011 15:12:29.705560    6673 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1011 15:12:29.705600    6673 start.go:340] cluster config:
	{Name:default-k8s-diff-port-270000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-270000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:c
luster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/s
ocket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 15:12:29.710292    6673 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:12:29.718312    6673 out.go:177] * Starting "default-k8s-diff-port-270000" primary control-plane node in "default-k8s-diff-port-270000" cluster
	I1011 15:12:29.722354    6673 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1011 15:12:29.722371    6673 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1011 15:12:29.722380    6673 cache.go:56] Caching tarball of preloaded images
	I1011 15:12:29.722462    6673 preload.go:172] Found /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1011 15:12:29.722475    6673 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1011 15:12:29.722531    6673 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/default-k8s-diff-port-270000/config.json ...
	I1011 15:12:29.722546    6673 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/default-k8s-diff-port-270000/config.json: {Name:mk244405abcc640b055e3c0b745dfeb4aa6b9c06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 15:12:29.722916    6673 start.go:360] acquireMachinesLock for default-k8s-diff-port-270000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:12:29.722965    6673 start.go:364] duration metric: took 41.667µs to acquireMachinesLock for "default-k8s-diff-port-270000"
	I1011 15:12:29.722978    6673 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-270000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-270000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 15:12:29.723002    6673 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 15:12:29.731408    6673 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1011 15:12:29.748763    6673 start.go:159] libmachine.API.Create for "default-k8s-diff-port-270000" (driver="qemu2")
	I1011 15:12:29.748793    6673 client.go:168] LocalClient.Create starting
	I1011 15:12:29.748866    6673 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 15:12:29.748902    6673 main.go:141] libmachine: Decoding PEM data...
	I1011 15:12:29.748912    6673 main.go:141] libmachine: Parsing certificate...
	I1011 15:12:29.748950    6673 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 15:12:29.748979    6673 main.go:141] libmachine: Decoding PEM data...
	I1011 15:12:29.748989    6673 main.go:141] libmachine: Parsing certificate...
	I1011 15:12:29.749362    6673 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 15:12:29.905750    6673 main.go:141] libmachine: Creating SSH key...
	I1011 15:12:29.986410    6673 main.go:141] libmachine: Creating Disk image...
	I1011 15:12:29.986419    6673 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 15:12:29.986683    6673 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/default-k8s-diff-port-270000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/default-k8s-diff-port-270000/disk.qcow2
	I1011 15:12:29.996726    6673 main.go:141] libmachine: STDOUT: 
	I1011 15:12:29.996742    6673 main.go:141] libmachine: STDERR: 
	I1011 15:12:29.996811    6673 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/default-k8s-diff-port-270000/disk.qcow2 +20000M
	I1011 15:12:30.005373    6673 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 15:12:30.005387    6673 main.go:141] libmachine: STDERR: 
	I1011 15:12:30.005404    6673 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/default-k8s-diff-port-270000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/default-k8s-diff-port-270000/disk.qcow2
	I1011 15:12:30.005411    6673 main.go:141] libmachine: Starting QEMU VM...
	I1011 15:12:30.005425    6673 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:12:30.005455    6673 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/default-k8s-diff-port-270000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/default-k8s-diff-port-270000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/default-k8s-diff-port-270000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:98:2c:c0:fe:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/default-k8s-diff-port-270000/disk.qcow2
	I1011 15:12:30.007309    6673 main.go:141] libmachine: STDOUT: 
	I1011 15:12:30.007323    6673 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:12:30.007349    6673 client.go:171] duration metric: took 258.554167ms to LocalClient.Create
	I1011 15:12:32.009499    6673 start.go:128] duration metric: took 2.286507375s to createHost
	I1011 15:12:32.009539    6673 start.go:83] releasing machines lock for "default-k8s-diff-port-270000", held for 2.286600625s
	W1011 15:12:32.009608    6673 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:12:32.030791    6673 out.go:177] * Deleting "default-k8s-diff-port-270000" in qemu2 ...
	W1011 15:12:32.075834    6673 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:12:32.075876    6673 start.go:729] Will try again in 5 seconds ...
	I1011 15:12:37.078069    6673 start.go:360] acquireMachinesLock for default-k8s-diff-port-270000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:12:37.078714    6673 start.go:364] duration metric: took 504.708µs to acquireMachinesLock for "default-k8s-diff-port-270000"
	I1011 15:12:37.078880    6673 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-270000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-270000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 15:12:37.079201    6673 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 15:12:37.088938    6673 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1011 15:12:37.139134    6673 start.go:159] libmachine.API.Create for "default-k8s-diff-port-270000" (driver="qemu2")
	I1011 15:12:37.139185    6673 client.go:168] LocalClient.Create starting
	I1011 15:12:37.139334    6673 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 15:12:37.139417    6673 main.go:141] libmachine: Decoding PEM data...
	I1011 15:12:37.139436    6673 main.go:141] libmachine: Parsing certificate...
	I1011 15:12:37.139552    6673 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 15:12:37.139632    6673 main.go:141] libmachine: Decoding PEM data...
	I1011 15:12:37.139647    6673 main.go:141] libmachine: Parsing certificate...
	I1011 15:12:37.140513    6673 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 15:12:37.314522    6673 main.go:141] libmachine: Creating SSH key...
	I1011 15:12:37.365708    6673 main.go:141] libmachine: Creating Disk image...
	I1011 15:12:37.365714    6673 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 15:12:37.365931    6673 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/default-k8s-diff-port-270000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/default-k8s-diff-port-270000/disk.qcow2
	I1011 15:12:37.375805    6673 main.go:141] libmachine: STDOUT: 
	I1011 15:12:37.375826    6673 main.go:141] libmachine: STDERR: 
	I1011 15:12:37.375894    6673 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/default-k8s-diff-port-270000/disk.qcow2 +20000M
	I1011 15:12:37.384424    6673 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 15:12:37.384440    6673 main.go:141] libmachine: STDERR: 
	I1011 15:12:37.384451    6673 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/default-k8s-diff-port-270000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/default-k8s-diff-port-270000/disk.qcow2
	I1011 15:12:37.384457    6673 main.go:141] libmachine: Starting QEMU VM...
	I1011 15:12:37.384469    6673 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:12:37.384491    6673 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/default-k8s-diff-port-270000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/default-k8s-diff-port-270000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/default-k8s-diff-port-270000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:02:b7:64:22:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/default-k8s-diff-port-270000/disk.qcow2
	I1011 15:12:37.386393    6673 main.go:141] libmachine: STDOUT: 
	I1011 15:12:37.386407    6673 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:12:37.386419    6673 client.go:171] duration metric: took 247.231458ms to LocalClient.Create
	I1011 15:12:39.388609    6673 start.go:128] duration metric: took 2.309376375s to createHost
	I1011 15:12:39.388675    6673 start.go:83] releasing machines lock for "default-k8s-diff-port-270000", held for 2.309972083s
	W1011 15:12:39.389075    6673 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-270000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-270000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:12:39.398890    6673 out.go:201] 
	W1011 15:12:39.404959    6673 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 15:12:39.404992    6673 out.go:270] * 
	* 
	W1011 15:12:39.407577    6673 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 15:12:39.416885    6673 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-270000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-270000 -n default-k8s-diff-port-270000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-270000 -n default-k8s-diff-port-270000: exit status 7 (68.773584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-270000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.91s)
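Note: every failed start in this group reports the same host-side error, Failed to connect to "/var/run/socket_vmnet": Connection refused, i.e. the socket_vmnet networking daemon that the qemu2 driver relies on was not reachable on the build host. A minimal sketch of how one might check the daemon on such a host (illustrative commands, not part of this test run; the socket path is taken from the log above, and the Homebrew service name is an assumption about how the host manages the daemon):

	pgrep -fl socket_vmnet                     # is the daemon process running?
	ls -l /var/run/socket_vmnet                # does the UNIX socket exist?
	sudo brew services restart socket_vmnet    # one common way to (re)start it, if installed via Homebrew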

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-616000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000: exit status 7 (34.421584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-616000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-616000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-616000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-616000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.351375ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-616000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-616000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000: exit status 7 (32.598708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-616000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-616000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000: exit status 7 (33.443125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-616000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)
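The -want/+got diff above lists every expected v1.31.1 image as missing because image list was run against a profile whose VM never booted, so the returned image list was empty. On a healthy profile, a quick manual spot-check could look like the following (illustrative, not part of this run):

	out/minikube-darwin-arm64 -p embed-certs-616000 image list --format=json | grep kube-apiserver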

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-616000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-616000 --alsologtostderr -v=1: exit status 83 (44.145334ms)

                                                
                                                
-- stdout --
	* The control-plane node embed-certs-616000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-616000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 15:12:32.360247    6697 out.go:345] Setting OutFile to fd 1 ...
	I1011 15:12:32.360433    6697 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:12:32.360436    6697 out.go:358] Setting ErrFile to fd 2...
	I1011 15:12:32.360438    6697 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:12:32.360556    6697 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 15:12:32.360774    6697 out.go:352] Setting JSON to false
	I1011 15:12:32.360783    6697 mustload.go:65] Loading cluster: embed-certs-616000
	I1011 15:12:32.361010    6697 config.go:182] Loaded profile config "embed-certs-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 15:12:32.364629    6697 out.go:177] * The control-plane node embed-certs-616000 host is not running: state=Stopped
	I1011 15:12:32.368612    6697 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-616000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-616000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000: exit status 7 (32.122292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-616000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000: exit status 7 (33.157125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-616000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (10.04s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-876000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-876000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.968870583s)

                                                
                                                
-- stdout --
	* [newest-cni-876000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-876000" primary control-plane node in "newest-cni-876000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-876000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 15:12:32.698698    6714 out.go:345] Setting OutFile to fd 1 ...
	I1011 15:12:32.698857    6714 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:12:32.698864    6714 out.go:358] Setting ErrFile to fd 2...
	I1011 15:12:32.698866    6714 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:12:32.699014    6714 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 15:12:32.700125    6714 out.go:352] Setting JSON to false
	I1011 15:12:32.718145    6714 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6122,"bootTime":1728678630,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 15:12:32.718215    6714 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 15:12:32.723752    6714 out.go:177] * [newest-cni-876000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 15:12:32.730692    6714 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 15:12:32.730752    6714 notify.go:220] Checking for updates...
	I1011 15:12:32.736626    6714 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 15:12:32.739658    6714 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 15:12:32.742796    6714 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 15:12:32.744137    6714 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 15:12:32.746669    6714 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 15:12:32.750004    6714 config.go:182] Loaded profile config "default-k8s-diff-port-270000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 15:12:32.750066    6714 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 15:12:32.750132    6714 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 15:12:32.754450    6714 out.go:177] * Using the qemu2 driver based on user configuration
	I1011 15:12:32.761684    6714 start.go:297] selected driver: qemu2
	I1011 15:12:32.761690    6714 start.go:901] validating driver "qemu2" against <nil>
	I1011 15:12:32.761696    6714 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 15:12:32.764082    6714 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1011 15:12:32.764115    6714 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1011 15:12:32.767498    6714 out.go:177] * Automatically selected the socket_vmnet network
	I1011 15:12:32.774694    6714 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1011 15:12:32.774710    6714 cni.go:84] Creating CNI manager for ""
	I1011 15:12:32.774733    6714 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1011 15:12:32.774737    6714 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1011 15:12:32.774766    6714 start.go:340] cluster config:
	{Name:newest-cni-876000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-876000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 15:12:32.779389    6714 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:12:32.787654    6714 out.go:177] * Starting "newest-cni-876000" primary control-plane node in "newest-cni-876000" cluster
	I1011 15:12:32.791686    6714 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1011 15:12:32.791703    6714 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1011 15:12:32.791712    6714 cache.go:56] Caching tarball of preloaded images
	I1011 15:12:32.791799    6714 preload.go:172] Found /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1011 15:12:32.791805    6714 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1011 15:12:32.791858    6714 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/newest-cni-876000/config.json ...
	I1011 15:12:32.791875    6714 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/newest-cni-876000/config.json: {Name:mk8ab3011a89f5ed9954be89340fba58b7430d80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 15:12:32.792250    6714 start.go:360] acquireMachinesLock for newest-cni-876000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:12:32.792304    6714 start.go:364] duration metric: took 48.166µs to acquireMachinesLock for "newest-cni-876000"
	I1011 15:12:32.792317    6714 start.go:93] Provisioning new machine with config: &{Name:newest-cni-876000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:newest-cni-876000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 15:12:32.792347    6714 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 15:12:32.800642    6714 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1011 15:12:32.818979    6714 start.go:159] libmachine.API.Create for "newest-cni-876000" (driver="qemu2")
	I1011 15:12:32.819004    6714 client.go:168] LocalClient.Create starting
	I1011 15:12:32.819078    6714 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 15:12:32.819116    6714 main.go:141] libmachine: Decoding PEM data...
	I1011 15:12:32.819131    6714 main.go:141] libmachine: Parsing certificate...
	I1011 15:12:32.819168    6714 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 15:12:32.819198    6714 main.go:141] libmachine: Decoding PEM data...
	I1011 15:12:32.819207    6714 main.go:141] libmachine: Parsing certificate...
	I1011 15:12:32.819641    6714 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 15:12:32.976565    6714 main.go:141] libmachine: Creating SSH key...
	I1011 15:12:33.025217    6714 main.go:141] libmachine: Creating Disk image...
	I1011 15:12:33.025222    6714 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 15:12:33.025441    6714 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/newest-cni-876000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/newest-cni-876000/disk.qcow2
	I1011 15:12:33.035279    6714 main.go:141] libmachine: STDOUT: 
	I1011 15:12:33.035302    6714 main.go:141] libmachine: STDERR: 
	I1011 15:12:33.035363    6714 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/newest-cni-876000/disk.qcow2 +20000M
	I1011 15:12:33.043811    6714 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 15:12:33.043831    6714 main.go:141] libmachine: STDERR: 
	I1011 15:12:33.043850    6714 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/newest-cni-876000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/newest-cni-876000/disk.qcow2
	I1011 15:12:33.043856    6714 main.go:141] libmachine: Starting QEMU VM...
	I1011 15:12:33.043868    6714 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:12:33.043903    6714 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/newest-cni-876000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/newest-cni-876000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/newest-cni-876000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:13:ab:b4:de:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/newest-cni-876000/disk.qcow2
	I1011 15:12:33.045695    6714 main.go:141] libmachine: STDOUT: 
	I1011 15:12:33.045709    6714 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:12:33.045736    6714 client.go:171] duration metric: took 226.728667ms to LocalClient.Create
	I1011 15:12:35.047882    6714 start.go:128] duration metric: took 2.255549792s to createHost
	I1011 15:12:35.047993    6714 start.go:83] releasing machines lock for "newest-cni-876000", held for 2.255713583s
	W1011 15:12:35.048047    6714 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:12:35.062179    6714 out.go:177] * Deleting "newest-cni-876000" in qemu2 ...
	W1011 15:12:35.086919    6714 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:12:35.086945    6714 start.go:729] Will try again in 5 seconds ...
	I1011 15:12:40.089048    6714 start.go:360] acquireMachinesLock for newest-cni-876000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:12:40.089460    6714 start.go:364] duration metric: took 335.083µs to acquireMachinesLock for "newest-cni-876000"
	I1011 15:12:40.089603    6714 start.go:93] Provisioning new machine with config: &{Name:newest-cni-876000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:newest-cni-876000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 15:12:40.089890    6714 start.go:125] createHost starting for "" (driver="qemu2")
	I1011 15:12:40.098649    6714 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1011 15:12:40.146503    6714 start.go:159] libmachine.API.Create for "newest-cni-876000" (driver="qemu2")
	I1011 15:12:40.146551    6714 client.go:168] LocalClient.Create starting
	I1011 15:12:40.146689    6714 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/ca.pem
	I1011 15:12:40.146754    6714 main.go:141] libmachine: Decoding PEM data...
	I1011 15:12:40.146771    6714 main.go:141] libmachine: Parsing certificate...
	I1011 15:12:40.146855    6714 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19749-1186/.minikube/certs/cert.pem
	I1011 15:12:40.146888    6714 main.go:141] libmachine: Decoding PEM data...
	I1011 15:12:40.146901    6714 main.go:141] libmachine: Parsing certificate...
	I1011 15:12:40.147498    6714 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1011 15:12:40.315110    6714 main.go:141] libmachine: Creating SSH key...
	I1011 15:12:40.567131    6714 main.go:141] libmachine: Creating Disk image...
	I1011 15:12:40.567142    6714 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1011 15:12:40.567458    6714 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/newest-cni-876000/disk.qcow2.raw /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/newest-cni-876000/disk.qcow2
	I1011 15:12:40.578053    6714 main.go:141] libmachine: STDOUT: 
	I1011 15:12:40.578078    6714 main.go:141] libmachine: STDERR: 
	I1011 15:12:40.578138    6714 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/newest-cni-876000/disk.qcow2 +20000M
	I1011 15:12:40.586636    6714 main.go:141] libmachine: STDOUT: Image resized.
	
	I1011 15:12:40.586654    6714 main.go:141] libmachine: STDERR: 
	I1011 15:12:40.586666    6714 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/newest-cni-876000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/newest-cni-876000/disk.qcow2
	I1011 15:12:40.586670    6714 main.go:141] libmachine: Starting QEMU VM...
	I1011 15:12:40.586678    6714 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:12:40.586713    6714 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/newest-cni-876000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/newest-cni-876000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/newest-cni-876000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:8a:5b:06:c6:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/newest-cni-876000/disk.qcow2
	I1011 15:12:40.588515    6714 main.go:141] libmachine: STDOUT: 
	I1011 15:12:40.588530    6714 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:12:40.588543    6714 client.go:171] duration metric: took 441.993125ms to LocalClient.Create
	I1011 15:12:42.590695    6714 start.go:128] duration metric: took 2.500801875s to createHost
	I1011 15:12:42.590851    6714 start.go:83] releasing machines lock for "newest-cni-876000", held for 2.501357084s
	W1011 15:12:42.591301    6714 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-876000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-876000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:12:42.604913    6714 out.go:201] 
	W1011 15:12:42.609073    6714 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 15:12:42.609098    6714 out.go:270] * 
	* 
	W1011 15:12:42.611541    6714 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 15:12:42.620931    6714 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-876000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-876000 -n newest-cni-876000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-876000 -n newest-cni-876000: exit status 7 (69.73475ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-876000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.04s)
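The newest-cni first start fails with the same two-attempt pattern as the other profiles: create the host, hit the socket_vmnet connection error, delete the profile, retry after 5 seconds, fail again, and exit with status 80 (GUEST_PROVISION). A minimal way to reproduce the networking error outside the test harness is to run the same wrapper with a trivial command, reusing the client binary and socket path from the executed command above (illustrative):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If the daemon is not listening, this prints the same Failed to connect to "/var/run/socket_vmnet": Connection refused message; once it succeeds, VM creation should get past this step.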

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-270000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-270000 create -f testdata/busybox.yaml: exit status 1 (28.8355ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-270000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-270000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-270000 -n default-k8s-diff-port-270000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-270000 -n default-k8s-diff-port-270000: exit status 7 (33.354125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-270000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-270000 -n default-k8s-diff-port-270000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-270000 -n default-k8s-diff-port-270000: exit status 7 (32.432583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-270000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-270000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-270000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-270000 describe deploy/metrics-server -n kube-system: exit status 1 (26.933375ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-270000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-270000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-270000 -n default-k8s-diff-port-270000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-270000 -n default-k8s-diff-port-270000: exit status 7 (33.156333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-270000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)
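Here the addons enable invocation did not report a non-zero exit, but the kubectl describe step fails because the earlier start exited before a kubeconfig context for the profile was ever written. A quick way to confirm that the context is missing (illustrative, not part of this run):

	kubectl config get-contexts default-k8s-diff-port-270000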

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-270000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-270000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.191601583s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-270000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-270000" primary control-plane node in "default-k8s-diff-port-270000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-270000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-270000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 15:12:43.440278    6781 out.go:345] Setting OutFile to fd 1 ...
	I1011 15:12:43.440464    6781 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:12:43.440467    6781 out.go:358] Setting ErrFile to fd 2...
	I1011 15:12:43.440469    6781 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:12:43.440627    6781 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 15:12:43.441721    6781 out.go:352] Setting JSON to false
	I1011 15:12:43.459186    6781 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6133,"bootTime":1728678630,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 15:12:43.459261    6781 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 15:12:43.464201    6781 out.go:177] * [default-k8s-diff-port-270000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 15:12:43.471081    6781 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 15:12:43.471106    6781 notify.go:220] Checking for updates...
	I1011 15:12:43.478003    6781 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 15:12:43.481126    6781 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 15:12:43.484112    6781 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 15:12:43.487040    6781 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 15:12:43.490089    6781 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 15:12:43.493479    6781 config.go:182] Loaded profile config "default-k8s-diff-port-270000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 15:12:43.493760    6781 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 15:12:43.497065    6781 out.go:177] * Using the qemu2 driver based on existing profile
	I1011 15:12:43.504107    6781 start.go:297] selected driver: qemu2
	I1011 15:12:43.504114    6781 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-270000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-270000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:f
alse ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 15:12:43.504210    6781 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 15:12:43.506741    6781 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 15:12:43.506768    6781 cni.go:84] Creating CNI manager for ""
	I1011 15:12:43.506794    6781 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1011 15:12:43.506822    6781 start.go:340] cluster config:
	{Name:default-k8s-diff-port-270000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-270000 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 15:12:43.511301    6781 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:12:43.520095    6781 out.go:177] * Starting "default-k8s-diff-port-270000" primary control-plane node in "default-k8s-diff-port-270000" cluster
	I1011 15:12:43.523043    6781 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1011 15:12:43.523061    6781 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1011 15:12:43.523070    6781 cache.go:56] Caching tarball of preloaded images
	I1011 15:12:43.523147    6781 preload.go:172] Found /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1011 15:12:43.523160    6781 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1011 15:12:43.523208    6781 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/default-k8s-diff-port-270000/config.json ...
	I1011 15:12:43.523661    6781 start.go:360] acquireMachinesLock for default-k8s-diff-port-270000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:12:43.523690    6781 start.go:364] duration metric: took 22.959µs to acquireMachinesLock for "default-k8s-diff-port-270000"
	I1011 15:12:43.523700    6781 start.go:96] Skipping create...Using existing machine configuration
	I1011 15:12:43.523704    6781 fix.go:54] fixHost starting: 
	I1011 15:12:43.523819    6781 fix.go:112] recreateIfNeeded on default-k8s-diff-port-270000: state=Stopped err=<nil>
	W1011 15:12:43.523826    6781 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 15:12:43.527146    6781 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-270000" ...
	I1011 15:12:43.534090    6781 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:12:43.534136    6781 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/default-k8s-diff-port-270000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/default-k8s-diff-port-270000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/default-k8s-diff-port-270000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:02:b7:64:22:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/default-k8s-diff-port-270000/disk.qcow2
	I1011 15:12:43.536320    6781 main.go:141] libmachine: STDOUT: 
	I1011 15:12:43.536370    6781 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:12:43.536396    6781 fix.go:56] duration metric: took 12.691041ms for fixHost
	I1011 15:12:43.536400    6781 start.go:83] releasing machines lock for "default-k8s-diff-port-270000", held for 12.705958ms
	W1011 15:12:43.536406    6781 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 15:12:43.536444    6781 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:12:43.536448    6781 start.go:729] Will try again in 5 seconds ...
	I1011 15:12:48.538552    6781 start.go:360] acquireMachinesLock for default-k8s-diff-port-270000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:12:48.538987    6781 start.go:364] duration metric: took 358.375µs to acquireMachinesLock for "default-k8s-diff-port-270000"
	I1011 15:12:48.539119    6781 start.go:96] Skipping create...Using existing machine configuration
	I1011 15:12:48.539139    6781 fix.go:54] fixHost starting: 
	I1011 15:12:48.539805    6781 fix.go:112] recreateIfNeeded on default-k8s-diff-port-270000: state=Stopped err=<nil>
	W1011 15:12:48.539836    6781 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 15:12:48.549375    6781 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-270000" ...
	I1011 15:12:48.553373    6781 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:12:48.553534    6781 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/default-k8s-diff-port-270000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/default-k8s-diff-port-270000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/default-k8s-diff-port-270000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:02:b7:64:22:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/default-k8s-diff-port-270000/disk.qcow2
	I1011 15:12:48.563545    6781 main.go:141] libmachine: STDOUT: 
	I1011 15:12:48.563935    6781 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:12:48.564018    6781 fix.go:56] duration metric: took 24.8795ms for fixHost
	I1011 15:12:48.564037    6781 start.go:83] releasing machines lock for "default-k8s-diff-port-270000", held for 25.025875ms
	W1011 15:12:48.564241    6781 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-270000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-270000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:12:48.571470    6781 out.go:201] 
	W1011 15:12:48.575422    6781 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 15:12:48.575442    6781 out.go:270] * 
	* 
	W1011 15:12:48.577336    6781 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 15:12:48.587361    6781 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-270000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-270000 -n default-k8s-diff-port-270000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-270000 -n default-k8s-diff-port-270000: exit status 7 (70.830542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-270000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-876000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-876000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.188415333s)

                                                
                                                
-- stdout --
	* [newest-cni-876000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-876000" primary control-plane node in "newest-cni-876000" cluster
	* Restarting existing qemu2 VM for "newest-cni-876000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-876000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 15:12:44.815930    6796 out.go:345] Setting OutFile to fd 1 ...
	I1011 15:12:44.816116    6796 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:12:44.816120    6796 out.go:358] Setting ErrFile to fd 2...
	I1011 15:12:44.816122    6796 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:12:44.816242    6796 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 15:12:44.817287    6796 out.go:352] Setting JSON to false
	I1011 15:12:44.834900    6796 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6134,"bootTime":1728678630,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 15:12:44.834970    6796 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 15:12:44.838786    6796 out.go:177] * [newest-cni-876000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 15:12:44.846864    6796 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 15:12:44.846918    6796 notify.go:220] Checking for updates...
	I1011 15:12:44.854796    6796 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 15:12:44.857685    6796 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 15:12:44.860790    6796 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 15:12:44.863767    6796 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 15:12:44.865079    6796 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 15:12:44.868088    6796 config.go:182] Loaded profile config "newest-cni-876000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 15:12:44.868385    6796 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 15:12:44.871785    6796 out.go:177] * Using the qemu2 driver based on existing profile
	I1011 15:12:44.876764    6796 start.go:297] selected driver: qemu2
	I1011 15:12:44.876771    6796 start.go:901] validating driver "qemu2" against &{Name:newest-cni-876000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:newest-cni-876000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Lis
tenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 15:12:44.876870    6796 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 15:12:44.879293    6796 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1011 15:12:44.879317    6796 cni.go:84] Creating CNI manager for ""
	I1011 15:12:44.879336    6796 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1011 15:12:44.879359    6796 start.go:340] cluster config:
	{Name:newest-cni-876000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-876000 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0
CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 15:12:44.883626    6796 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 15:12:44.891748    6796 out.go:177] * Starting "newest-cni-876000" primary control-plane node in "newest-cni-876000" cluster
	I1011 15:12:44.894785    6796 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1011 15:12:44.894804    6796 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1011 15:12:44.894812    6796 cache.go:56] Caching tarball of preloaded images
	I1011 15:12:44.894885    6796 preload.go:172] Found /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1011 15:12:44.894891    6796 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1011 15:12:44.894958    6796 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/newest-cni-876000/config.json ...
	I1011 15:12:44.895432    6796 start.go:360] acquireMachinesLock for newest-cni-876000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:12:44.895467    6796 start.go:364] duration metric: took 28.834µs to acquireMachinesLock for "newest-cni-876000"
	I1011 15:12:44.895477    6796 start.go:96] Skipping create...Using existing machine configuration
	I1011 15:12:44.895481    6796 fix.go:54] fixHost starting: 
	I1011 15:12:44.895596    6796 fix.go:112] recreateIfNeeded on newest-cni-876000: state=Stopped err=<nil>
	W1011 15:12:44.895602    6796 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 15:12:44.899768    6796 out.go:177] * Restarting existing qemu2 VM for "newest-cni-876000" ...
	I1011 15:12:44.907610    6796 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:12:44.907645    6796 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/newest-cni-876000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/newest-cni-876000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/newest-cni-876000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:8a:5b:06:c6:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/newest-cni-876000/disk.qcow2
	I1011 15:12:44.909807    6796 main.go:141] libmachine: STDOUT: 
	I1011 15:12:44.909826    6796 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:12:44.909856    6796 fix.go:56] duration metric: took 14.3725ms for fixHost
	I1011 15:12:44.909862    6796 start.go:83] releasing machines lock for "newest-cni-876000", held for 14.390083ms
	W1011 15:12:44.909867    6796 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 15:12:44.909900    6796 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:12:44.909904    6796 start.go:729] Will try again in 5 seconds ...
	I1011 15:12:49.912138    6796 start.go:360] acquireMachinesLock for newest-cni-876000: {Name:mkbc919b494dae77e1cf970be5f47c9f64d7d155 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1011 15:12:49.912812    6796 start.go:364] duration metric: took 531.25µs to acquireMachinesLock for "newest-cni-876000"
	I1011 15:12:49.912983    6796 start.go:96] Skipping create...Using existing machine configuration
	I1011 15:12:49.913005    6796 fix.go:54] fixHost starting: 
	I1011 15:12:49.913845    6796 fix.go:112] recreateIfNeeded on newest-cni-876000: state=Stopped err=<nil>
	W1011 15:12:49.913872    6796 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 15:12:49.918393    6796 out.go:177] * Restarting existing qemu2 VM for "newest-cni-876000" ...
	I1011 15:12:49.924287    6796 qemu.go:418] Using hvf for hardware acceleration
	I1011 15:12:49.924610    6796 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/newest-cni-876000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/newest-cni-876000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/newest-cni-876000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:8a:5b:06:c6:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19749-1186/.minikube/machines/newest-cni-876000/disk.qcow2
	I1011 15:12:49.935041    6796 main.go:141] libmachine: STDOUT: 
	I1011 15:12:49.935109    6796 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1011 15:12:49.935204    6796 fix.go:56] duration metric: took 22.2035ms for fixHost
	I1011 15:12:49.935227    6796 start.go:83] releasing machines lock for "newest-cni-876000", held for 22.363042ms
	W1011 15:12:49.935446    6796 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-876000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-876000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1011 15:12:49.943214    6796 out.go:201] 
	W1011 15:12:49.946346    6796 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1011 15:12:49.946371    6796 out.go:270] * 
	* 
	W1011 15:12:49.948910    6796 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 15:12:49.965222    6796 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-876000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-876000 -n newest-cni-876000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-876000 -n newest-cni-876000: exit status 7 (76.069542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-876000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-270000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-270000 -n default-k8s-diff-port-270000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-270000 -n default-k8s-diff-port-270000: exit status 7 (34.387625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-270000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-270000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-270000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-270000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.388417ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-270000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-270000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-270000 -n default-k8s-diff-port-270000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-270000 -n default-k8s-diff-port-270000: exit status 7 (32.915625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-270000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-270000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-270000 -n default-k8s-diff-port-270000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-270000 -n default-k8s-diff-port-270000: exit status 7 (32.773583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-270000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-270000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-270000 --alsologtostderr -v=1: exit status 83 (43.831917ms)

                                                
                                                
-- stdout --
	* The control-plane node default-k8s-diff-port-270000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-270000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 15:12:48.876243    6815 out.go:345] Setting OutFile to fd 1 ...
	I1011 15:12:48.876461    6815 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:12:48.876464    6815 out.go:358] Setting ErrFile to fd 2...
	I1011 15:12:48.876466    6815 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:12:48.876576    6815 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 15:12:48.876794    6815 out.go:352] Setting JSON to false
	I1011 15:12:48.876803    6815 mustload.go:65] Loading cluster: default-k8s-diff-port-270000
	I1011 15:12:48.877023    6815 config.go:182] Loaded profile config "default-k8s-diff-port-270000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 15:12:48.880546    6815 out.go:177] * The control-plane node default-k8s-diff-port-270000 host is not running: state=Stopped
	I1011 15:12:48.884428    6815 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-270000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-270000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-270000 -n default-k8s-diff-port-270000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-270000 -n default-k8s-diff-port-270000: exit status 7 (33.384917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-270000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-270000 -n default-k8s-diff-port-270000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-270000 -n default-k8s-diff-port-270000: exit status 7 (32.756542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-270000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-876000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-876000 -n newest-cni-876000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-876000 -n newest-cni-876000: exit status 7 (34.501958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-876000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-876000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-876000 --alsologtostderr -v=1: exit status 83 (46.011666ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-876000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-876000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 15:12:50.164315    6841 out.go:345] Setting OutFile to fd 1 ...
	I1011 15:12:50.164504    6841 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:12:50.164507    6841 out.go:358] Setting ErrFile to fd 2...
	I1011 15:12:50.164510    6841 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 15:12:50.164653    6841 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 15:12:50.164871    6841 out.go:352] Setting JSON to false
	I1011 15:12:50.164880    6841 mustload.go:65] Loading cluster: newest-cni-876000
	I1011 15:12:50.165098    6841 config.go:182] Loaded profile config "newest-cni-876000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 15:12:50.169784    6841 out.go:177] * The control-plane node newest-cni-876000 host is not running: state=Stopped
	I1011 15:12:50.173735    6841 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-876000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-876000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-876000 -n newest-cni-876000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-876000 -n newest-cni-876000: exit status 7 (34.030542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-876000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-876000 -n newest-cni-876000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-876000 -n newest-cni-876000: exit status 7 (33.696583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-876000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)

                                                
                                    

Test pass (152/273)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.31.1/json-events 12.25
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.11
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 197.39
29 TestAddons/serial/Volcano 37.85
31 TestAddons/serial/GCPAuth/Namespaces 0.08
32 TestAddons/serial/GCPAuth/PullSecret 8.43
34 TestAddons/parallel/Registry 14.7
35 TestAddons/parallel/Ingress 16.38
36 TestAddons/parallel/InspektorGadget 11.32
37 TestAddons/parallel/MetricsServer 5.25
39 TestAddons/parallel/CSI 50.16
40 TestAddons/parallel/Headlamp 19.55
41 TestAddons/parallel/CloudSpanner 5.19
42 TestAddons/parallel/LocalPath 41.09
43 TestAddons/parallel/NvidiaDevicePlugin 6.16
44 TestAddons/parallel/Yakd 10.38
46 TestAddons/StoppedEnableDisable 12.44
54 TestHyperKitDriverInstallOrUpdate 10.24
57 TestErrorSpam/setup 34.68
58 TestErrorSpam/start 0.36
59 TestErrorSpam/status 0.26
60 TestErrorSpam/pause 0.74
61 TestErrorSpam/unpause 0.66
62 TestErrorSpam/stop 64.33
65 TestFunctional/serial/CopySyncFile 0
66 TestFunctional/serial/StartWithProxy 81.51
67 TestFunctional/serial/AuditLog 0
68 TestFunctional/serial/SoftStart 36.89
69 TestFunctional/serial/KubeContext 0.03
70 TestFunctional/serial/KubectlGetPods 0.04
73 TestFunctional/serial/CacheCmd/cache/add_remote 3.12
74 TestFunctional/serial/CacheCmd/cache/add_local 1.17
75 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
76 TestFunctional/serial/CacheCmd/cache/list 0.04
77 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
78 TestFunctional/serial/CacheCmd/cache/cache_reload 0.68
79 TestFunctional/serial/CacheCmd/cache/delete 0.08
80 TestFunctional/serial/MinikubeKubectlCmd 1.98
81 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.17
82 TestFunctional/serial/ExtraConfig 39.63
83 TestFunctional/serial/ComponentHealth 0.04
84 TestFunctional/serial/LogsCmd 0.64
85 TestFunctional/serial/LogsFileCmd 0.62
86 TestFunctional/serial/InvalidService 4.06
88 TestFunctional/parallel/ConfigCmd 0.26
89 TestFunctional/parallel/DashboardCmd 10.33
90 TestFunctional/parallel/DryRun 0.24
91 TestFunctional/parallel/InternationalLanguage 0.11
92 TestFunctional/parallel/StatusCmd 0.27
97 TestFunctional/parallel/AddonsCmd 0.11
98 TestFunctional/parallel/PersistentVolumeClaim 24.41
100 TestFunctional/parallel/SSHCmd 0.14
101 TestFunctional/parallel/CpCmd 0.47
103 TestFunctional/parallel/FileSync 0.07
104 TestFunctional/parallel/CertSync 0.43
108 TestFunctional/parallel/NodeLabels 0.07
110 TestFunctional/parallel/NonActiveRuntimeDisabled 0.18
112 TestFunctional/parallel/License 0.35
113 TestFunctional/parallel/Version/short 0.04
114 TestFunctional/parallel/Version/components 0.15
115 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
116 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
117 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
118 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
119 TestFunctional/parallel/ImageCommands/ImageBuild 1.86
120 TestFunctional/parallel/ImageCommands/Setup 1.75
121 TestFunctional/parallel/DockerEnv/bash 0.35
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.06
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.06
125 TestFunctional/parallel/ServiceCmd/DeployApp 14.1
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.46
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.67
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.18
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.17
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.17
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.25
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.19
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 1.1
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.11
138 TestFunctional/parallel/ServiceCmd/List 0.13
139 TestFunctional/parallel/ServiceCmd/JSONOutput 0.09
140 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
141 TestFunctional/parallel/ServiceCmd/Format 0.1
142 TestFunctional/parallel/ServiceCmd/URL 0.1
143 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
144 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
145 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.03
146 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
147 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
148 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
149 TestFunctional/parallel/ProfileCmd/profile_not_create 0.16
150 TestFunctional/parallel/ProfileCmd/profile_list 0.15
151 TestFunctional/parallel/ProfileCmd/profile_json_output 0.14
152 TestFunctional/parallel/MountCmd/any-port 5.55
153 TestFunctional/parallel/MountCmd/specific-port 1.02
154 TestFunctional/parallel/MountCmd/VerifyCleanup 2.05
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.01
157 TestFunctional/delete_minikube_cached_images 0.01
167 TestMultiControlPlane/serial/CopyFile 0.03
175 TestImageBuild/serial/Setup 33.34
176 TestImageBuild/serial/NormalBuild 1.57
177 TestImageBuild/serial/BuildWithBuildArg 0.64
178 TestImageBuild/serial/BuildWithDockerIgnore 0.46
179 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.46
184 TestJSONOutput/start/Audit 0
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 6.52
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.21
211 TestMainNoArgs 0.04
212 TestMinikubeProfile 70.4
258 TestStoppedBinaryUpgrade/Setup 1.16
270 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
274 TestNoKubernetes/serial/VerifyK8sNotRunning 0.05
275 TestNoKubernetes/serial/ProfileList 31.34
276 TestNoKubernetes/serial/Stop 2.08
278 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
287 TestStoppedBinaryUpgrade/MinikubeLogs 0.93
293 TestStartStop/group/old-k8s-version/serial/Stop 4.05
294 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.13
306 TestStartStop/group/no-preload/serial/Stop 3.27
309 TestStartStop/group/embed-certs/serial/Stop 3.84
310 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
312 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
328 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.56
329 TestStartStop/group/newest-cni/serial/DeployApp 0
330 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
331 TestStartStop/group/newest-cni/serial/Stop 1.88
332 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
334 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1011 13:57:58.450971    1707 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1011 13:57:58.451486    1707 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-503000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-503000: exit status 85 (97.005916ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-503000 | jenkins | v1.34.0 | 11 Oct 24 13:57 PDT |          |
	|         | -p download-only-503000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/11 13:57:30
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 13:57:30.726860    1708 out.go:345] Setting OutFile to fd 1 ...
	I1011 13:57:30.727027    1708 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 13:57:30.727031    1708 out.go:358] Setting ErrFile to fd 2...
	I1011 13:57:30.727033    1708 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 13:57:30.727155    1708 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	W1011 13:57:30.727239    1708 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19749-1186/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19749-1186/.minikube/config/config.json: no such file or directory
	I1011 13:57:30.728673    1708 out.go:352] Setting JSON to true
	I1011 13:57:30.748198    1708 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1620,"bootTime":1728678630,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 13:57:30.748264    1708 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 13:57:30.756623    1708 out.go:97] [download-only-503000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 13:57:30.756800    1708 notify.go:220] Checking for updates...
	W1011 13:57:30.756818    1708 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball: no such file or directory
	I1011 13:57:30.760551    1708 out.go:169] MINIKUBE_LOCATION=19749
	I1011 13:57:30.766632    1708 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 13:57:30.771551    1708 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 13:57:30.775576    1708 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 13:57:30.778627    1708 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	W1011 13:57:30.784573    1708 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1011 13:57:30.784803    1708 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 13:57:30.788633    1708 out.go:97] Using the qemu2 driver based on user configuration
	I1011 13:57:30.788654    1708 start.go:297] selected driver: qemu2
	I1011 13:57:30.788671    1708 start.go:901] validating driver "qemu2" against <nil>
	I1011 13:57:30.788744    1708 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 13:57:30.792527    1708 out.go:169] Automatically selected the socket_vmnet network
	I1011 13:57:30.798552    1708 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1011 13:57:30.798633    1708 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1011 13:57:30.798673    1708 cni.go:84] Creating CNI manager for ""
	I1011 13:57:30.798723    1708 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1011 13:57:30.798784    1708 start.go:340] cluster config:
	{Name:download-only-503000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-503000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 13:57:30.803711    1708 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 13:57:30.807577    1708 out.go:97] Downloading VM boot image ...
	I1011 13:57:30.807592    1708 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso
	I1011 13:57:42.575051    1708 out.go:97] Starting "download-only-503000" primary control-plane node in "download-only-503000" cluster
	I1011 13:57:42.575084    1708 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1011 13:57:42.633317    1708 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1011 13:57:42.633349    1708 cache.go:56] Caching tarball of preloaded images
	I1011 13:57:42.633574    1708 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1011 13:57:42.638627    1708 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1011 13:57:42.638634    1708 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1011 13:57:42.719809    1708 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1011 13:57:57.215485    1708 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1011 13:57:57.215661    1708 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1011 13:57:57.911110    1708 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1011 13:57:57.911305    1708 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/download-only-503000/config.json ...
	I1011 13:57:57.911324    1708 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/download-only-503000/config.json: {Name:mkd2b98657911ccd623de976d2d8a0b17645864c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 13:57:57.911598    1708 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1011 13:57:57.911846    1708 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1011 13:57:58.402547    1708 out.go:193] 
	W1011 13:57:58.407645    1708 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19749-1186/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10a309060 0x10a309060 0x10a309060 0x10a309060 0x10a309060 0x10a309060 0x10a309060] Decompressors:map[bz2:0x14000915990 gz:0x14000915998 tar:0x14000915940 tar.bz2:0x14000915950 tar.gz:0x14000915960 tar.xz:0x14000915970 tar.zst:0x14000915980 tbz2:0x14000915950 tgz:0x14000915960 txz:0x14000915970 tzst:0x14000915980 xz:0x140009159a0 zip:0x140009159b0 zst:0x140009159a8] Getters:map[file:0x140016f86f0 http:0x140006901e0 https:0x14000690230] Dir:false ProgressList
ener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1011 13:57:58.407666    1708 out_reason.go:110] 
	W1011 13:57:58.415481    1708 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 13:57:58.418553    1708 out.go:193] 
	
	
	* The control-plane node download-only-503000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-503000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-503000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (12.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-318000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-318000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 : (12.250496209s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (12.25s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1011 13:58:11.075038    1707 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I1011 13:58:11.075090    1707 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-318000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-318000: exit status 85 (83.8655ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-503000 | jenkins | v1.34.0 | 11 Oct 24 13:57 PDT |                     |
	|         | -p download-only-503000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 11 Oct 24 13:57 PDT | 11 Oct 24 13:57 PDT |
	| delete  | -p download-only-503000        | download-only-503000 | jenkins | v1.34.0 | 11 Oct 24 13:57 PDT | 11 Oct 24 13:57 PDT |
	| start   | -o=json --download-only        | download-only-318000 | jenkins | v1.34.0 | 11 Oct 24 13:57 PDT |                     |
	|         | -p download-only-318000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/11 13:57:58
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 13:57:58.855397    1735 out.go:345] Setting OutFile to fd 1 ...
	I1011 13:57:58.855567    1735 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 13:57:58.855570    1735 out.go:358] Setting ErrFile to fd 2...
	I1011 13:57:58.855572    1735 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 13:57:58.855683    1735 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 13:57:58.856925    1735 out.go:352] Setting JSON to true
	I1011 13:57:58.874410    1735 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1648,"bootTime":1728678630,"procs":494,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 13:57:58.874475    1735 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 13:57:58.879535    1735 out.go:97] [download-only-318000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 13:57:58.879611    1735 notify.go:220] Checking for updates...
	I1011 13:57:58.883531    1735 out.go:169] MINIKUBE_LOCATION=19749
	I1011 13:57:58.886566    1735 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 13:57:58.890411    1735 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 13:57:58.893557    1735 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 13:57:58.896542    1735 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	W1011 13:57:58.902498    1735 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1011 13:57:58.902670    1735 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 13:57:58.905492    1735 out.go:97] Using the qemu2 driver based on user configuration
	I1011 13:57:58.905501    1735 start.go:297] selected driver: qemu2
	I1011 13:57:58.905505    1735 start.go:901] validating driver "qemu2" against <nil>
	I1011 13:57:58.905554    1735 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 13:57:58.908541    1735 out.go:169] Automatically selected the socket_vmnet network
	I1011 13:57:58.913835    1735 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1011 13:57:58.913937    1735 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1011 13:57:58.913955    1735 cni.go:84] Creating CNI manager for ""
	I1011 13:57:58.913981    1735 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1011 13:57:58.913991    1735 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1011 13:57:58.914033    1735 start.go:340] cluster config:
	{Name:download-only-318000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-318000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAut
hSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 13:57:58.918369    1735 iso.go:125] acquiring lock: {Name:mk370eac292c548d907728d926e63c373a8b261c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 13:57:58.921591    1735 out.go:97] Starting "download-only-318000" primary control-plane node in "download-only-318000" cluster
	I1011 13:57:58.921601    1735 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1011 13:57:58.978672    1735 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1011 13:57:58.978696    1735 cache.go:56] Caching tarball of preloaded images
	I1011 13:57:58.978905    1735 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1011 13:57:58.982243    1735 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1011 13:57:58.982250    1735 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I1011 13:57:59.066286    1735 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /Users/jenkins/minikube-integration/19749-1186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-318000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-318000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-318000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:935: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-392000
addons_test.go:935: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-392000: exit status 85 (71.419375ms)

                                                
                                                
-- stdout --
	* Profile "addons-392000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-392000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:946: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-392000
addons_test.go:946: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-392000: exit status 85 (74.857792ms)

                                                
                                                
-- stdout --
	* Profile "addons-392000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-392000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (197.39s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-392000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-darwin-arm64 start -p addons-392000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m17.393096291s)
--- PASS: TestAddons/Setup (197.39s)

                                                
                                    
TestAddons/serial/Volcano (37.85s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:819: volcano-controller stabilized in 8.820834ms
addons_test.go:811: volcano-admission stabilized in 8.856084ms
addons_test.go:803: volcano-scheduler stabilized in 8.86525ms
addons_test.go:825: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-7pdpf" [a0e82fe8-e289-44dc-8a82-3776c4be58e3] Running
addons_test.go:825: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.006496458s
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-8twmw" [0b5ba100-2682-4e12-b18b-98eb081e7979] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00485975s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-q2np8" [dead73a6-5e93-4a9a-9135-a3cbcfdbc2bf] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004397959s
addons_test.go:838: (dbg) Run:  kubectl --context addons-392000 delete -n volcano-system job volcano-admission-init
addons_test.go:844: (dbg) Run:  kubectl --context addons-392000 create -f testdata/vcjob.yaml
addons_test.go:852: (dbg) Run:  kubectl --context addons-392000 get vcjob -n my-volcano
addons_test.go:870: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [3c66fa6c-94ec-4eb1-879e-0c450efc994a] Pending
helpers_test.go:344: "test-job-nginx-0" [3c66fa6c-94ec-4eb1-879e-0c450efc994a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [3c66fa6c-94ec-4eb1-879e-0c450efc994a] Running
addons_test.go:870: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.005953917s
addons_test.go:988: (dbg) Run:  out/minikube-darwin-arm64 -p addons-392000 addons disable volcano --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-darwin-arm64 -p addons-392000 addons disable volcano --alsologtostderr -v=1: (10.612853584s)
--- PASS: TestAddons/serial/Volcano (37.85s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.08s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-392000 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-392000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.08s)

                                                
                                    
TestAddons/serial/GCPAuth/PullSecret (8.43s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/PullSecret
addons_test.go:614: (dbg) Run:  kubectl --context addons-392000 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-392000 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3d815fd4-b5a6-494b-b72e-0675cf2aa840] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3d815fd4-b5a6-494b-b72e-0675cf2aa840] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: integration-test=busybox healthy within 8.011691375s
addons_test.go:633: (dbg) Run:  kubectl --context addons-392000 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-392000 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-392000 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-392000 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/PullSecret (8.43s)

                                                
                                    
TestAddons/parallel/Registry (14.7s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 1.275875ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-d2jp6" [61b0d3d4-f040-41de-8b08-b8f57d0cb92a] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003877333s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-6rblb" [54fc1d01-977f-4261-9cf4-990661c7ff91] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007517375s
addons_test.go:331: (dbg) Run:  kubectl --context addons-392000 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-392000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-392000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.351249375s)
addons_test.go:350: (dbg) Run:  out/minikube-darwin-arm64 -p addons-392000 ip
2024/10/11 14:02:38 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:988: (dbg) Run:  out/minikube-darwin-arm64 -p addons-392000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.70s)

                                                
                                    
TestAddons/parallel/Ingress (16.38s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-392000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-392000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-392000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [8ea32fb6-f7f4-4cf4-a329-dee49b8eeeae] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [8ea32fb6-f7f4-4cf4-a329-dee49b8eeeae] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.013383958s
I1011 14:03:49.925289    1707 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-darwin-arm64 -p addons-392000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-392000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-darwin-arm64 -p addons-392000 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:988: (dbg) Run:  out/minikube-darwin-arm64 -p addons-392000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:988: (dbg) Run:  out/minikube-darwin-arm64 -p addons-392000 addons disable ingress --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-darwin-arm64 -p addons-392000 addons disable ingress --alsologtostderr -v=1: (7.254027417s)
--- PASS: TestAddons/parallel/Ingress (16.38s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.32s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-h28rg" [df0bff7b-69d3-4963-852f-8b61d95ac1ca] Running
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.011781625s
addons_test.go:988: (dbg) Run:  out/minikube-darwin-arm64 -p addons-392000 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-darwin-arm64 -p addons-392000 addons disable inspektor-gadget --alsologtostderr -v=1: (5.309719125s)
--- PASS: TestAddons/parallel/InspektorGadget (11.32s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 1.503458ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-9rkj4" [760fc841-fb1d-4963-bdab-38300b687a76] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004814625s
addons_test.go:402: (dbg) Run:  kubectl --context addons-392000 top pods -n kube-system
addons_test.go:988: (dbg) Run:  out/minikube-darwin-arm64 -p addons-392000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.25s)

                                                
                                    
TestAddons/parallel/CSI (50.16s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1011 14:03:00.238406    1707 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1011 14:03:00.240820    1707 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1011 14:03:00.240826    1707 kapi.go:107] duration metric: took 2.464083ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 2.467167ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-392000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-392000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-392000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-392000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-392000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-392000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-392000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-392000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-392000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-392000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [81653422-c688-44e5-98f5-35af3a26700c] Pending
helpers_test.go:344: "task-pv-pod" [81653422-c688-44e5-98f5-35af3a26700c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [81653422-c688-44e5-98f5-35af3a26700c] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.008663s
addons_test.go:511: (dbg) Run:  kubectl --context addons-392000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-392000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-392000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-392000 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-392000 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-392000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-392000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-392000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-392000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-392000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-392000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-392000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-392000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-392000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-392000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-392000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-392000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-392000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-392000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-392000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-392000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-392000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-392000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-392000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-392000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-392000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [4208a99c-0f88-4966-a5b1-c09c728a60d8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [4208a99c-0f88-4966-a5b1-c09c728a60d8] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004567125s
addons_test.go:553: (dbg) Run:  kubectl --context addons-392000 delete pod task-pv-pod-restore
addons_test.go:553: (dbg) Done: kubectl --context addons-392000 delete pod task-pv-pod-restore: (1.083071417s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-392000 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-392000 delete volumesnapshot new-snapshot-demo
addons_test.go:988: (dbg) Run:  out/minikube-darwin-arm64 -p addons-392000 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:988: (dbg) Run:  out/minikube-darwin-arm64 -p addons-392000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-darwin-arm64 -p addons-392000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.124204375s)
--- PASS: TestAddons/parallel/CSI (50.16s)

                                                
                                    
TestAddons/parallel/Headlamp (19.55s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:743: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-392000 --alsologtostderr -v=1
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-k97b2" [fbfa3636-8743-4e84-a9a2-bcf9e72b1d76] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-k97b2" [fbfa3636-8743-4e84-a9a2-bcf9e72b1d76] Running
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.004672333s
addons_test.go:988: (dbg) Run:  out/minikube-darwin-arm64 -p addons-392000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-darwin-arm64 -p addons-392000 addons disable headlamp --alsologtostderr -v=1: (5.234255083s)
--- PASS: TestAddons/parallel/Headlamp (19.55s)

TestAddons/parallel/CloudSpanner (5.19s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-c88g4" [5e714caf-f8d0-40de-b010-ca1eda9df9db] Running
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003616958s
addons_test.go:988: (dbg) Run:  out/minikube-darwin-arm64 -p addons-392000 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.19s)

TestAddons/parallel/LocalPath (41.09s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:884: (dbg) Run:  kubectl --context addons-392000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:890: (dbg) Run:  kubectl --context addons-392000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:894: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-392000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-392000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-392000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-392000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-392000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-392000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:897: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [8c4e17ca-25e1-4680-ac63-8d4e36c727d6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [8c4e17ca-25e1-4680-ac63-8d4e36c727d6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [8c4e17ca-25e1-4680-ac63-8d4e36c727d6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:897: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.006855875s
addons_test.go:902: (dbg) Run:  kubectl --context addons-392000 get pvc test-pvc -o=json
addons_test.go:911: (dbg) Run:  out/minikube-darwin-arm64 -p addons-392000 ssh "cat /opt/local-path-provisioner/pvc-6b834c2b-856d-4b6f-be5b-5fdbfb55ed8d_default_test-pvc/file1"
addons_test.go:923: (dbg) Run:  kubectl --context addons-392000 delete pod test-local-path
addons_test.go:927: (dbg) Run:  kubectl --context addons-392000 delete pvc test-pvc
addons_test.go:988: (dbg) Run:  out/minikube-darwin-arm64 -p addons-392000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-darwin-arm64 -p addons-392000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.588135334s)
--- PASS: TestAddons/parallel/LocalPath (41.09s)

TestAddons/parallel/NvidiaDevicePlugin (6.16s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:960: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-gn27b" [1093be52-4019-497a-91cf-4c63abeeeefd] Running
addons_test.go:960: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003748042s
addons_test.go:988: (dbg) Run:  out/minikube-darwin-arm64 -p addons-392000 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.16s)

TestAddons/parallel/Yakd (10.38s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:982: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-khvj5" [34b09b2b-105f-4e50-a41f-678bcea46752] Running
addons_test.go:982: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003965958s
addons_test.go:988: (dbg) Run:  out/minikube-darwin-arm64 -p addons-392000 addons disable yakd --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-darwin-arm64 -p addons-392000 addons disable yakd --alsologtostderr -v=1: (5.373938417s)
--- PASS: TestAddons/parallel/Yakd (10.38s)

TestAddons/StoppedEnableDisable (12.44s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-392000
addons_test.go:170: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-392000: (12.238281125s)
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-392000
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-392000
addons_test.go:183: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-392000
--- PASS: TestAddons/StoppedEnableDisable (12.44s)

TestHyperKitDriverInstallOrUpdate (10.24s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
I1011 14:58:08.269088    1707 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1011 14:58:08.269292    1707 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
W1011 14:58:10.282295    1707 install.go:62] docker-machine-driver-hyperkit: exit status 1
W1011 14:58:10.282521    1707 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1011 14:58:10.282566    1707 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3592822065/001/docker-machine-driver-hyperkit
I1011 14:58:10.801926    1707 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3592822065/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x1055de400 0x1055de400 0x1055de400 0x1055de400 0x1055de400 0x1055de400 0x1055de400] Decompressors:map[bz2:0x1400081ae20 gz:0x1400081ae28 tar:0x1400081add0 tar.bz2:0x1400081ade0 tar.gz:0x1400081adf0 tar.xz:0x1400081ae00 tar.zst:0x1400081ae10 tbz2:0x1400081ade0 tgz:0x1400081adf0 txz:0x1400081ae00 tzst:0x1400081ae10 xz:0x1400081ae30 zip:0x1400081ae40 zst:0x1400081ae38] Getters:map[file:0x1400080a520 http:0x1400006f130 https:0x1400006f220] Dir:false ProgressListener:<nil> Insecure:false DisableSy
mlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1011 14:58:10.802049    1707 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3592822065/001/docker-machine-driver-hyperkit
I1011 14:58:13.583804    1707 install.go:79] stdout: 
W1011 14:58:13.584007    1707 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3592822065/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3592822065/001/docker-machine-driver-hyperkit 

I1011 14:58:13.584044    1707 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3592822065/001/docker-machine-driver-hyperkit]
I1011 14:58:13.605054    1707 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3592822065/001/docker-machine-driver-hyperkit]
--- PASS: TestHyperKitDriverInstallOrUpdate (10.24s)

TestErrorSpam/setup (34.68s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-969000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-969000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-969000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-969000 --driver=qemu2 : (34.682750459s)
--- PASS: TestErrorSpam/setup (34.68s)

TestErrorSpam/start (0.36s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-969000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-969000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-969000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-969000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-969000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-969000 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

TestErrorSpam/status (0.26s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-969000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-969000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-969000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-969000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-969000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-969000 status
--- PASS: TestErrorSpam/status (0.26s)

TestErrorSpam/pause (0.74s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-969000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-969000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-969000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-969000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-969000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-969000 pause
--- PASS: TestErrorSpam/pause (0.74s)

TestErrorSpam/unpause (0.66s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-969000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-969000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-969000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-969000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-969000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-969000 unpause
--- PASS: TestErrorSpam/unpause (0.66s)

TestErrorSpam/stop (64.33s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-969000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-969000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-969000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-969000 stop: (12.21032925s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-969000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-969000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-969000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-969000 stop: (26.060136541s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-969000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-969000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-969000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-969000 stop: (26.060359208s)
--- PASS: TestErrorSpam/stop (64.33s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19749-1186/.minikube/files/etc/test/nested/copy/1707/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (81.51s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-044000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
E1011 14:06:29.356000    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/addons-392000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:06:29.363621    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/addons-392000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:06:29.377353    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/addons-392000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:06:29.400955    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/addons-392000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:06:29.444307    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/addons-392000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:06:29.527727    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/addons-392000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:06:29.691143    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/addons-392000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:06:30.014570    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/addons-392000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:06:30.658053    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/addons-392000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:06:31.941577    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/addons-392000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:06:34.505120    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/addons-392000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:06:39.629027    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/addons-392000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:06:49.873056    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/addons-392000/client.crt: no such file or directory" logger="UnhandledError"
E1011 14:07:10.356794    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/addons-392000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-044000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (1m21.512249458s)
--- PASS: TestFunctional/serial/StartWithProxy (81.51s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.89s)

=== RUN   TestFunctional/serial/SoftStart
I1011 14:07:13.287664    1707 config.go:182] Loaded profile config "functional-044000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-044000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-044000 --alsologtostderr -v=8: (36.884917416s)
functional_test.go:663: soft start took 36.885354666s for "functional-044000" cluster.
I1011 14:07:50.172650    1707 config.go:182] Loaded profile config "functional-044000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (36.89s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-044000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 cache add registry.k8s.io/pause:3.1
E1011 14:07:51.320206    1707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19749-1186/.minikube/profiles/addons-392000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-044000 cache add registry.k8s.io/pause:3.1: (1.192475084s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-044000 cache add registry.k8s.io/pause:3.3: (1.066540958s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.12s)

TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-044000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local4146080756/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 cache add minikube-local-cache-test:functional-044000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 cache delete minikube-local-cache-test:functional-044000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-044000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.68s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-044000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (76.12525ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.68s)

TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (1.98s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 kubectl -- --context functional-044000 get pods
functional_test.go:716: (dbg) Done: out/minikube-darwin-arm64 -p functional-044000 kubectl -- --context functional-044000 get pods: (1.976235833s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.98s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.17s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-044000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-044000 get pods: (1.170819542s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.17s)

TestFunctional/serial/ExtraConfig (39.63s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-044000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-044000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.628368875s)
functional_test.go:761: restart took 39.628482583s for "functional-044000" cluster.
I1011 14:08:38.227248    1707 config.go:182] Loaded profile config "functional-044000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (39.63s)

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-044000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.64s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.64s)

TestFunctional/serial/LogsFileCmd (0.62s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd4094990755/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.62s)

TestFunctional/serial/InvalidService (4.06s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-044000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-044000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-044000: exit status 115 (148.644667ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:31098 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-044000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.06s)

TestFunctional/parallel/ConfigCmd (0.26s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-044000 config get cpus: exit status 14 (40.10175ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-044000 config get cpus: exit status 14 (34.283375ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.26s)

TestFunctional/parallel/DashboardCmd (10.33s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-044000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-044000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2969: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.33s)

TestFunctional/parallel/DryRun (0.24s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-044000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-044000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (124.156166ms)

-- stdout --
	* [functional-044000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1011 14:09:33.578197    2952 out.go:345] Setting OutFile to fd 1 ...
	I1011 14:09:33.578362    2952 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:09:33.578366    2952 out.go:358] Setting ErrFile to fd 2...
	I1011 14:09:33.578368    2952 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:09:33.578492    2952 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 14:09:33.579785    2952 out.go:352] Setting JSON to false
	I1011 14:09:33.598842    2952 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2343,"bootTime":1728678630,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 14:09:33.598929    2952 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 14:09:33.602782    2952 out.go:177] * [functional-044000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1011 14:09:33.610722    2952 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 14:09:33.610794    2952 notify.go:220] Checking for updates...
	I1011 14:09:33.618632    2952 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 14:09:33.622624    2952 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 14:09:33.625707    2952 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 14:09:33.628664    2952 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 14:09:33.631705    2952 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 14:09:33.634861    2952 config.go:182] Loaded profile config "functional-044000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 14:09:33.635173    2952 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 14:09:33.639695    2952 out.go:177] * Using the qemu2 driver based on existing profile
	I1011 14:09:33.646596    2952 start.go:297] selected driver: qemu2
	I1011 14:09:33.646601    2952 start.go:901] validating driver "qemu2" against &{Name:functional-044000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:functional-044000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 14:09:33.646656    2952 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 14:09:33.655641    2952 out.go:201] 
	W1011 14:09:33.658780    2952 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1011 14:09:33.665608    2952 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-044000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.24s)

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-044000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-044000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (113.073084ms)

-- stdout --
	* [functional-044000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1011 14:09:33.817638    2963 out.go:345] Setting OutFile to fd 1 ...
	I1011 14:09:33.817782    2963 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:09:33.817786    2963 out.go:358] Setting ErrFile to fd 2...
	I1011 14:09:33.817795    2963 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 14:09:33.817937    2963 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
	I1011 14:09:33.819424    2963 out.go:352] Setting JSON to false
	I1011 14:09:33.838096    2963 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2343,"bootTime":1728678630,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1011 14:09:33.838175    2963 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1011 14:09:33.841693    2963 out.go:177] * [functional-044000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	I1011 14:09:33.848732    2963 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 14:09:33.848784    2963 notify.go:220] Checking for updates...
	I1011 14:09:33.854681    2963 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	I1011 14:09:33.857719    2963 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1011 14:09:33.859055    2963 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 14:09:33.861681    2963 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	I1011 14:09:33.864671    2963 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 14:09:33.868015    2963 config.go:182] Loaded profile config "functional-044000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1011 14:09:33.868257    2963 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 14:09:33.872619    2963 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I1011 14:09:33.879692    2963 start.go:297] selected driver: qemu2
	I1011 14:09:33.879698    2963 start.go:901] validating driver "qemu2" against &{Name:functional-044000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:functional-044000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 14:09:33.879763    2963 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 14:09:33.886680    2963 out.go:201] 
	W1011 14:09:33.890721    2963 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1011 14:09:33.894699    2963 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/StatusCmd (0.27s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.27s)

TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (24.41s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [ea843ddc-ffcd-4282-a8c5-8165338f3f28] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004903833s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-044000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-044000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-044000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-044000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [fbb29b30-fd00-4d34-bbee-2382f36b0e62] Pending
helpers_test.go:344: "sp-pod" [fbb29b30-fd00-4d34-bbee-2382f36b0e62] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [fbb29b30-fd00-4d34-bbee-2382f36b0e62] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.006690667s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-044000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-044000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-044000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2d64ce04-da25-4bae-8e9d-bea6ef437cf9] Pending
helpers_test.go:344: "sp-pod" [2d64ce04-da25-4bae-8e9d-bea6ef437cf9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2d64ce04-da25-4bae-8e9d-bea6ef437cf9] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.009650875s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-044000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.41s)
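
The sequence above is the persistence check: write a marker file through the first pod, delete and recreate the pod, then confirm the file is still on the claim. A rough sketch of the same steps driven from Go (assumes kubectl and the functional-044000 context; the wait for the recreated pod to become Ready is omitted for brevity):

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a kubectl command against the test cluster's context.
func kubectl(args ...string) ([]byte, error) {
	base := append([]string{"--context", "functional-044000"}, args...)
	return exec.Command("kubectl", base...).CombinedOutput()
}

func main() {
	steps := [][]string{
		{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},       // write a marker on the PVC-backed mount
		{"delete", "-f", "testdata/storage-provisioner/pod.yaml"}, // remove the pod
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},  // recreate it (should be allowed to become Ready first)
		{"exec", "sp-pod", "--", "ls", "/tmp/mount"},              // the marker should still be there
	}
	for _, s := range steps {
		out, err := kubectl(s...)
		fmt.Printf("kubectl %v: %s (err=%v)\n", s, out, err)
	}
}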

                                                
                                    
TestFunctional/parallel/SSHCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.14s)

                                                
                                    
TestFunctional/parallel/CpCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh -n functional-044000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 cp functional-044000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd648453841/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh -n functional-044000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh -n functional-044000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.47s)

                                                
                                    
TestFunctional/parallel/FileSync (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1707/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "sudo cat /etc/test/nested/copy/1707/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

                                                
                                    
TestFunctional/parallel/CertSync (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1707.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "sudo cat /etc/ssl/certs/1707.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1707.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "sudo cat /usr/share/ca-certificates/1707.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/17072.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "sudo cat /etc/ssl/certs/17072.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/17072.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "sudo cat /usr/share/ca-certificates/17072.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.43s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-044000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-044000 ssh "sudo systemctl is-active crio": exit status 1 (181.723042ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.18s)
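
The non-zero exit above is expected: `systemctl is-active` prints the unit state and exits with status 3 when the unit is inactive, so "inactive" on stdout is precisely the signal that crio is not the active runtime. A small sketch of reading that result (binary path and profile name taken from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "-p", "functional-044000",
		"ssh", "sudo systemctl is-active crio")
	out, err := cmd.Output() // err is non-nil here because is-active exits non-zero for inactive units
	state := strings.TrimSpace(string(out))
	if state == "inactive" {
		fmt.Printf("crio is not the active runtime, as expected (exit: %v)\n", err)
		return
	}
	fmt.Println("unexpected state:", state)
}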

                                                
                                    
TestFunctional/parallel/License (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.35s)

                                                
                                    
TestFunctional/parallel/Version/short (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
TestFunctional/parallel/Version/components (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-044000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-044000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kicbase/echo-server:functional-044000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-044000 image ls --format short --alsologtostderr:
I1011 14:09:37.500496    2991 out.go:345] Setting OutFile to fd 1 ...
I1011 14:09:37.500701    2991 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 14:09:37.500704    2991 out.go:358] Setting ErrFile to fd 2...
I1011 14:09:37.500707    2991 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 14:09:37.500834    2991 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
I1011 14:09:37.501302    2991 config.go:182] Loaded profile config "functional-044000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1011 14:09:37.501364    2991 config.go:182] Loaded profile config "functional-044000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1011 14:09:37.502278    2991 ssh_runner.go:195] Run: systemctl --version
I1011 14:09:37.502288    2991 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/functional-044000/id_rsa Username:docker}
I1011 14:09:37.530401    2991 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-044000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| docker.io/kicbase/echo-server               | functional-044000 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/nginx                     | latest            | 048e090385966 | 197MB  |
| docker.io/library/nginx                     | alpine            | 577a23b5858b9 | 50.8MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| docker.io/library/minikube-local-cache-test | functional-044000 | 8037488107703 | 30B    |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| localhost/my-image                          | functional-044000 | 307045c34b0ed | 1.41MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-044000 image ls --format table --alsologtostderr:
I1011 14:09:39.604427    3004 out.go:345] Setting OutFile to fd 1 ...
I1011 14:09:39.604640    3004 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 14:09:39.604646    3004 out.go:358] Setting ErrFile to fd 2...
I1011 14:09:39.604648    3004 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 14:09:39.604784    3004 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
I1011 14:09:39.605330    3004 config.go:182] Loaded profile config "functional-044000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1011 14:09:39.605393    3004 config.go:182] Loaded profile config "functional-044000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1011 14:09:39.606369    3004 ssh_runner.go:195] Run: systemctl --version
I1011 14:09:39.606378    3004 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/functional-044000/id_rsa Username:docker}
I1011 14:09:39.634830    3004 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2024/10/11 14:09:43 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-044000 image ls --format json --alsologtostderr:
[{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-044000"],"size":"4780000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"afb61768ce381961ca0beff95337601f
29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"307045c34b0ed97696c48ba66a012b425431411d6d5e84582e6b85415ad57a27","repoDigests":[],"repoTags":["localhost/my-image:functional-044000"],"size":"1410000"},{"id":"80374881077035e3e7d5eb3e8f5e81f4452437f9ddc83bd32b495fa8e61aa264","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-044000"],"size":"30"},{"id":"577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431","repoDigests":
[],"repoTags":["docker.io/library/nginx:alpine"],"size":"50800000"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"048e09038596626fc38392bfd1b77ac8d5a0d6d0183b513290307d4451bc44b9","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"197000000"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}
]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-044000 image ls --format json --alsologtostderr:
I1011 14:09:39.517509    3002 out.go:345] Setting OutFile to fd 1 ...
I1011 14:09:39.517737    3002 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 14:09:39.517740    3002 out.go:358] Setting ErrFile to fd 2...
I1011 14:09:39.517743    3002 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 14:09:39.517902    3002 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
I1011 14:09:39.518353    3002 config.go:182] Loaded profile config "functional-044000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1011 14:09:39.518416    3002 config.go:182] Loaded profile config "functional-044000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1011 14:09:39.519297    3002 ssh_runner.go:195] Run: systemctl --version
I1011 14:09:39.519307    3002 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/functional-044000/id_rsa Username:docker}
I1011 14:09:39.548883    3002 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)
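
The JSON format shown above is an array of objects carrying an image ID, its repo tags, and a size in bytes (encoded as a string). A short sketch of consuming it, using a shortened stand-in for the real output:

package main

import (
	"encoding/json"
	"fmt"
)

// image mirrors the keys visible in the `image ls --format json` output.
type image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"`
}

func main() {
	// Shortened stand-in for the output captured above.
	data := []byte(`[{"id":"ba04bb24b957","repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"}]`)
	var images []image
	if err := json.Unmarshal(data, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%s -> %v (%s bytes)\n", img.ID, img.RepoTags, img.Size)
	}
}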

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-044000 image ls --format yaml --alsologtostderr:
- id: 048e09038596626fc38392bfd1b77ac8d5a0d6d0183b513290307d4451bc44b9
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "197000000"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-044000
size: "4780000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 80374881077035e3e7d5eb3e8f5e81f4452437f9ddc83bd32b495fa8e61aa264
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-044000
size: "30"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "50800000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-044000 image ls --format yaml --alsologtostderr:
I1011 14:09:37.576593    2993 out.go:345] Setting OutFile to fd 1 ...
I1011 14:09:37.576783    2993 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 14:09:37.576786    2993 out.go:358] Setting ErrFile to fd 2...
I1011 14:09:37.576788    2993 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 14:09:37.576933    2993 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
I1011 14:09:37.577393    2993 config.go:182] Loaded profile config "functional-044000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1011 14:09:37.577458    2993 config.go:182] Loaded profile config "functional-044000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1011 14:09:37.578290    2993 ssh_runner.go:195] Run: systemctl --version
I1011 14:09:37.578303    2993 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/functional-044000/id_rsa Username:docker}
I1011 14:09:37.606103    2993 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (1.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-044000 ssh pgrep buildkitd: exit status 1 (65.731333ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 image build -t localhost/my-image:functional-044000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-044000 image build -t localhost/my-image:functional-044000 testdata/build --alsologtostderr: (1.715546667s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-044000 image build -t localhost/my-image:functional-044000 testdata/build --alsologtostderr:
I1011 14:09:37.719497    2997 out.go:345] Setting OutFile to fd 1 ...
I1011 14:09:37.719760    2997 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 14:09:37.719763    2997 out.go:358] Setting ErrFile to fd 2...
I1011 14:09:37.719765    2997 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 14:09:37.719882    2997 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19749-1186/.minikube/bin
I1011 14:09:37.720302    2997 config.go:182] Loaded profile config "functional-044000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1011 14:09:37.721107    2997 config.go:182] Loaded profile config "functional-044000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1011 14:09:37.721941    2997 ssh_runner.go:195] Run: systemctl --version
I1011 14:09:37.721951    2997 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19749-1186/.minikube/machines/functional-044000/id_rsa Username:docker}
I1011 14:09:37.750521    2997 build_images.go:161] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.2183633905.tar
I1011 14:09:37.750597    2997 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1011 14:09:37.754792    2997 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2183633905.tar
I1011 14:09:37.756215    2997 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2183633905.tar: stat -c "%s %y" /var/lib/minikube/build/build.2183633905.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2183633905.tar': No such file or directory
I1011 14:09:37.756229    2997 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.2183633905.tar --> /var/lib/minikube/build/build.2183633905.tar (3072 bytes)
I1011 14:09:37.764394    2997 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2183633905
I1011 14:09:37.768143    2997 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2183633905 -xf /var/lib/minikube/build/build.2183633905.tar
I1011 14:09:37.771456    2997 docker.go:360] Building image: /var/lib/minikube/build/build.2183633905
I1011 14:09:37.771516    2997 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-044000 /var/lib/minikube/build/build.2183633905
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.4s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:307045c34b0ed97696c48ba66a012b425431411d6d5e84582e6b85415ad57a27 done
#8 naming to localhost/my-image:functional-044000 done
#8 DONE 0.0s
I1011 14:09:39.368356    2997 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-044000 /var/lib/minikube/build/build.2183633905: (1.596832417s)
I1011 14:09:39.368436    2997 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2183633905
I1011 14:09:39.372532    2997 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2183633905.tar
I1011 14:09:39.376078    2997 build_images.go:217] Built localhost/my-image:functional-044000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.2183633905.tar
I1011 14:09:39.376098    2997 build_images.go:133] succeeded building to: functional-044000
I1011 14:09:39.376101    2997 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.86s)
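
The stderr log above shows the build flow: the local testdata/build context is tarred, copied into the guest under /var/lib/minikube/build, and built there with `docker build`. For reference, a bare sketch of triggering the same step from Go (binary path, profile, and tag as in this run):

package main

import (
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "-p", "functional-044000",
		"image", "build", "-t", "localhost/my-image:functional-044000", "testdata/build")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("image build failed: %v\n%s", err, out)
	}
	log.Printf("built image:\n%s", out)
}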

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.733706375s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-044000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.75s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-044000 docker-env) && out/minikube-darwin-arm64 status -p functional-044000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-044000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.35s)
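
What the bash `eval $(... docker-env)` step does is apply the export lines that docker-env prints, so that a plain `docker images` afterwards talks to the VM's Docker daemon. A rough Go sketch of the same idea; the exact variables emitted are minikube's and are treated here as opaque KEY="value" pairs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	envOut, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-044000", "docker-env").Output()
	if err != nil {
		panic(err)
	}
	env := os.Environ()
	for _, line := range strings.Split(string(envOut), "\n") {
		line = strings.TrimSpace(line)
		if line == "" || strings.HasPrefix(line, "#") {
			continue // skip comments and blank lines
		}
		line = strings.TrimPrefix(line, "export ")
		env = append(env, strings.ReplaceAll(line, `"`, "")) // KEY="value" -> KEY=value
	}
	// With those variables applied, `docker images` lists images from the VM's daemon.
	docker := exec.Command("docker", "images")
	docker.Env = env
	out, _ := docker.CombinedOutput()
	fmt.Println(string(out))
}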

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (14.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-044000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-044000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-9st5k" [dbd99461-c7a2-4e17-91ed-f289c0dc3ce3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-9st5k" [dbd99461-c7a2-4e17-91ed-f289c0dc3ce3] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 14.01091875s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (14.10s)
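
The two kubectl steps recorded above create a deployment from the echoserver image and expose it as a NodePort service on port 8080; the test then waits for a pod with the app=hello-node label to become healthy. A compact sketch of just the create/expose portion (context name assumed from this run):

package main

import (
	"log"
	"os/exec"
)

// run executes a kubectl command against the test cluster's context and logs its output.
func run(args ...string) {
	out, err := exec.Command("kubectl", append([]string{"--context", "functional-044000"}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("%v: %s", err, out)
	}
	log.Printf("%s", out)
}

func main() {
	run("create", "deployment", "hello-node", "--image=registry.k8s.io/echoserver-arm:1.8")
	run("expose", "deployment", "hello-node", "--type=NodePort", "--port=8080")
}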

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 image load --daemon kicbase/echo-server:functional-044000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 image load --daemon kicbase/echo-server:functional-044000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.67s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-044000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 image load --daemon kicbase/echo-server:functional-044000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 image save kicbase/echo-server:functional-044000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 image rm kicbase/echo-server:functional-044000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-044000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 image save --daemon kicbase/echo-server:functional-044000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-044000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.19s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-044000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-044000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-044000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-044000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2803: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.10s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-044000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-044000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [1ae20dfe-2217-47fa-9343-41bb54587be2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [1ae20dfe-2217-47fa-9343-41bb54587be2] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.008407666s
I1011 14:09:02.416141    1707 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.13s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 service list -o json
functional_test.go:1494: Took "90.779417ms" to run "out/minikube-darwin-arm64 -p functional-044000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:30915
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)
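
The endpoint found above (https://192.168.105.4:30915) is simply the node's InternalIP plus the NodePort allocated to the hello-node service; both can be read back with jsonpath queries. A sketch, assuming the same context and service name:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// kubectl runs a kubectl query against the test cluster's context and returns trimmed stdout.
func kubectl(args ...string) string {
	out, err := exec.Command("kubectl", append([]string{"--context", "functional-044000"}, args...)...).Output()
	if err != nil {
		log.Fatal(err)
	}
	return strings.TrimSpace(string(out))
}

func main() {
	ip := kubectl("get", "nodes", "-o", `jsonpath={.items[0].status.addresses[?(@.type=="InternalIP")].address}`)
	port := kubectl("get", "svc", "hello-node", "-o", "jsonpath={.spec.ports[0].nodePort}")
	fmt.Printf("https://%s:%s\n", ip, port)
}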

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:30915
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-044000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.22.237 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I1011 14:09:02.513285    1707 config.go:182] Loaded profile config "functional-044000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)
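
The dig invocation above queries the cluster DNS service IP (10.96.0.10) directly, which only works because `minikube tunnel` is routing the service CIDR to the host. A Go sketch of the equivalent lookup with a resolver pointed at that IP:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			// Send all lookups to the cluster DNS service, reachable via the tunnel.
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "nginx-svc.default.svc.cluster.local.")
	fmt.Println(addrs, err)
}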

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I1011 14:09:02.557242    1707 config.go:182] Loaded profile config "functional-044000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-044000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.16s)

TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "104.790834ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "40.669417ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.14s)

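Note: this test times "profile list -o json" and its "--light" variant. A minimal Go sketch of consuming that JSON, assuming a "minikube" binary on PATH (the report itself drives out/minikube-darwin-arm64); the report does not show the output schema, so it is decoded into a generic map rather than a typed struct:

// Sketch: run `minikube profile list -o json` and decode the result
// without assuming a particular schema.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "profile", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var profiles map[string]interface{} // schema deliberately left opaque
	if err := json.Unmarshal(out, &profiles); err != nil {
		panic(err)
	}
	fmt.Printf("top-level keys in profile list output: %d\n", len(profiles))
}
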
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "104.923166ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "39.596167ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.14s)

TestFunctional/parallel/MountCmd/any-port (5.55s)

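Note: this test starts "minikube mount <hostdir>:/mount-9p" as a background daemon and then retries "findmnt" over SSH until the 9p mount appears; the first non-zero exit in the log is expected while the mount is still coming up. A minimal Go sketch of the same start-then-poll pattern, assuming the functional-044000 profile is running, "minikube" is on PATH, and an illustrative host directory; retry counts and sleeps are illustrative too:

// Sketch: start a minikube 9p mount in the background and poll until
// the guest can see it, mirroring the retry loop in the test log.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	hostDir := "/tmp/mount-demo" // illustrative; must already exist on the host
	mount := exec.Command("minikube", "-p", "functional-044000",
		"mount", hostDir+":/mount-9p")
	if err := mount.Start(); err != nil { // runs as a long-lived child process
		panic(err)
	}
	defer mount.Process.Kill()

	for i := 0; i < 5; i++ {
		check := exec.Command("minikube", "-p", "functional-044000",
			"ssh", "findmnt -T /mount-9p | grep 9p")
		if err := check.Run(); err == nil {
			fmt.Println("9p mount is visible in the guest")
			return
		}
		time.Sleep(time.Second) // the log retries after sub-second backoffs
	}
	fmt.Println("mount never appeared")
}
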
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-044000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port206435514/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1728680964928578000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port206435514/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1728680964928578000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port206435514/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1728680964928578000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port206435514/001/test-1728680964928578000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-044000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (66.605125ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1011 14:09:24.995763    1707 retry.go:31] will retry after 727.748383ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 11 21:09 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 11 21:09 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 11 21:09 test-1728680964928578000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh cat /mount-9p/test-1728680964928578000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-044000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [54c85afc-2112-4356-b6ce-d6174164fb78] Pending
helpers_test.go:344: "busybox-mount" [54c85afc-2112-4356-b6ce-d6174164fb78] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [54c85afc-2112-4356-b6ce-d6174164fb78] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [54c85afc-2112-4356-b6ce-d6174164fb78] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.011091208s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-044000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-044000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port206435514/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.55s)

TestFunctional/parallel/MountCmd/specific-port (1.02s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-044000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port295655619/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-044000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (68.699833ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1011 14:09:30.549397    1707 retry.go:31] will retry after 463.59103ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-044000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port295655619/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-044000 ssh "sudo umount -f /mount-9p": exit status 1 (67.242ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-044000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-044000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port295655619/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.02s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.05s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-044000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup592046441/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-044000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup592046441/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-044000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup592046441/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-044000 ssh "findmnt -T" /mount1: exit status 1 (87.239417ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1011 14:09:31.594733    1707 retry.go:31] will retry after 575.308183ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-044000 ssh "findmnt -T" /mount1: exit status 1 (80.371541ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1011 14:09:32.252643    1707 retry.go:31] will retry after 1.020729575s: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-044000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-044000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-044000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup592046441/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-044000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup592046441/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-044000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup592046441/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.05s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-044000
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-044000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-044000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/CopyFile (0.03s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-darwin-arm64 -p ha-737000 status --output json -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/CopyFile (0.03s)

TestImageBuild/serial/Setup (33.34s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-111000 --driver=qemu2 
image_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -p image-111000 --driver=qemu2 : (33.340038625s)
--- PASS: TestImageBuild/serial/Setup (33.34s)

TestImageBuild/serial/NormalBuild (1.57s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-111000
image_test.go:78: (dbg) Done: out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-111000: (1.573797042s)
--- PASS: TestImageBuild/serial/NormalBuild (1.57s)

TestImageBuild/serial/BuildWithBuildArg (0.64s)

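Note: this test passes a Docker build argument and a no-cache option through "minikube image build" via repeated --build-opt flags. A minimal Go sketch of the same invocation, assuming the image-111000 profile and the ./testdata/image-build/test-arg context from the log exist and "minikube" is on PATH:

// Sketch: build an image inside the cluster with a build-arg and no cache,
// using the same --build-opt flags the test passes.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "image", "build",
		"-t", "aaa:latest",
		"--build-opt=build-arg=ENV_A=test_env_str",
		"--build-opt=no-cache",
		"./testdata/image-build/test-arg",
		"-p", "image-111000")
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		panic(err)
	}
}
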
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-111000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.64s)

TestImageBuild/serial/BuildWithDockerIgnore (0.46s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-111000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.46s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.46s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-111000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.46s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.52s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-239000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-239000 --output=json --user=testUser: (6.517977084s)
--- PASS: TestJSONOutput/stop/Command (6.52s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

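Note: with --output=json, minikube emits one CloudEvents-style JSON object per line; the specversion, type, and data fields are visible in the captured stdout below, including the DRV_UNSUPPORTED_OS error event this test expects. A minimal Go sketch of decoding such a stream line by line, assuming the events arrive on stdin; only fields actually shown in the log are read:

// Sketch: decode minikube's --output=json event stream (one JSON object
// per line) and print the message carried in each event's data payload.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"` // values in the log are all strings
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip any non-JSON lines
		}
		fmt.Printf("%s: %s\n", e.Type, e.Data["message"])
	}
}
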
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-136000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-136000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (98.71675ms)

-- stdout --
	{"specversion":"1.0","id":"d85293b1-2a7e-4fdc-bb02-7f393f417474","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-136000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d3e7c31b-b111-45b3-97f3-bd63eebdc81c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19749"}}
	{"specversion":"1.0","id":"bcdae737-3114-44ad-b02d-80ed06beee37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig"}}
	{"specversion":"1.0","id":"c939045d-9192-4bd2-96b3-9a8eade9a62c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"6eb46685-e85d-49ba-a201-9483c29ed899","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0489adeb-c548-4bbc-903b-cf06c715d6d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube"}}
	{"specversion":"1.0","id":"585a429b-4f6f-423c-be35-a878abbad3e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"843a4945-7e6a-4b73-a3c4-695ac3d5b4aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-136000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-136000
--- PASS: TestErrorJSONOutput (0.21s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (70.4s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-908000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p first-908000 --driver=qemu2 : (34.726527833s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p second-909000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p second-909000 --driver=qemu2 : (34.954559666s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile first-908000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile second-909000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-909000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-909000
helpers_test.go:175: Cleaning up "first-908000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-908000
--- PASS: TestMinikubeProfile (70.40s)

TestStoppedBinaryUpgrade/Setup (1.16s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.16s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-796000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-796000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (99.799208ms)

-- stdout --
	* [NoKubernetes-796000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19749-1186/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19749-1186/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

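Note: this check shells into the node and asks systemd whether the kubelet unit is active; with the NoKubernetes profile stopped, "minikube ssh" itself exits with status 83 ("host is not running"), which the test accepts as evidence that no kubelet is running. A minimal Go sketch of the same probe, assuming the profile name from the log and "minikube" on PATH:

// Sketch: probe whether kubelet is active inside a minikube node.
// A non-zero exit means either kubelet is inactive or the host is down.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "ssh", "-p", "NoKubernetes-796000",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			fmt.Println("kubelet not confirmed running, exit code:", ee.ExitCode())
			return
		}
		panic(err)
	}
	fmt.Println("kubelet is active")
}
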
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-796000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-796000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (45.624083ms)

-- stdout --
	* The control-plane node NoKubernetes-796000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-796000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

TestNoKubernetes/serial/ProfileList (31.34s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.603568333s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.736926166s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.34s)

TestNoKubernetes/serial/Stop (2.08s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-796000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-796000: (2.078911917s)
--- PASS: TestNoKubernetes/serial/Stop (2.08s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-796000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-796000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (44.723208ms)

-- stdout --
	* The control-plane node NoKubernetes-796000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-796000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-583000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

TestStartStop/group/old-k8s-version/serial/Stop (4.05s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-627000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-627000 --alsologtostderr -v=3: (4.052144709s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (4.05s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)

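Note: this step verifies that an addon can be enabled while the cluster is stopped: "minikube status" reports exit status 7 for a stopped host (which the test treats as acceptable), and the dashboard addon is then enabled with an overridden MetricsScraper image. A minimal Go sketch of the same sequence, assuming the profile name and image override from the log and "minikube" on PATH:

// Sketch: tolerate a "Stopped" status (non-zero exit), then enable the
// dashboard addon with an image override, as the test does.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	status := exec.Command("minikube", "status", "--format={{.Host}}",
		"-p", "old-k8s-version-627000")
	if err := status.Run(); err != nil {
		fmt.Println("status returned non-zero (expected for a stopped host):", err)
	}

	enable := exec.Command("minikube", "addons", "enable", "dashboard",
		"-p", "old-k8s-version-627000",
		"--images=MetricsScraper=registry.k8s.io/echoserver:1.4")
	if out, err := enable.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("enable failed: %v\n%s", err, out))
	}
	fmt.Println("dashboard addon enabled")
}
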
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-627000 -n old-k8s-version-627000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-627000 -n old-k8s-version-627000: exit status 7 (58.784875ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-627000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/no-preload/serial/Stop (3.27s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-785000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-785000 --alsologtostderr -v=3: (3.271554125s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.27s)

TestStartStop/group/embed-certs/serial/Stop (3.84s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-616000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-616000 --alsologtostderr -v=3: (3.842345125s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.84s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-785000 -n no-preload-785000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-785000 -n no-preload-785000: exit status 7 (60.4675ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-785000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-616000 -n embed-certs-616000: exit status 7 (61.411834ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-616000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.56s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-270000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-270000 --alsologtostderr -v=3: (3.564613125s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.56s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-876000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (1.88s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-876000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-876000 --alsologtostderr -v=3: (1.883811333s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.88s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-270000 -n default-k8s-diff-port-270000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-270000 -n default-k8s-diff-port-270000: exit status 7 (58.025416ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-270000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-876000 -n newest-cni-876000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-876000 -n newest-cni-876000: exit status 7 (59.742542ms)

-- stdout --
	Stopped

                                                
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-876000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (22/273)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:968: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.45s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-204000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-204000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-204000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-204000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-204000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-204000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-204000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-204000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-204000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-204000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-204000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: /etc/hosts:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: /etc/resolv.conf:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-204000

>>> host: crictl pods:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: crictl containers:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> k8s: describe netcat deployment:
error: context "cilium-204000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-204000" does not exist

>>> k8s: netcat logs:
error: context "cilium-204000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-204000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-204000" does not exist

>>> k8s: coredns logs:
error: context "cilium-204000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-204000" does not exist

>>> k8s: api server logs:
error: context "cilium-204000" does not exist

>>> host: /etc/cni:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: ip a s:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: ip r s:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: iptables-save:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: iptables table nat:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-204000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-204000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-204000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-204000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-204000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-204000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-204000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-204000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-204000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-204000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-204000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: kubelet daemon config:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> k8s: kubelet logs:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-204000

>>> host: docker daemon status:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: docker daemon config:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: docker system info:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: cri-docker daemon status:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: cri-docker daemon config:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: cri-dockerd version:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: containerd daemon status:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: containerd daemon config:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: containerd config dump:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: crio daemon status:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: crio daemon config:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: /etc/crio:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: crio config:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

----------------------- debugLogs end: cilium-204000 [took: 2.33341175s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-204000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-204000
--- SKIP: TestNetworkPlugins/group/cilium (2.45s)

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-579000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-579000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
