Test Report: QEMU_macOS 19681

58481425fd156c33d9cb9581f1bb301aacf19547:2024-09-25:36370

Tests failed (99/273)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 25.41
7 TestDownloadOnly/v1.20.0/kubectl 0
21 TestBinaryMirror 0.27
22 TestOffline 10.03
33 TestAddons/parallel/Registry 71.34
45 TestCertOptions 10.21
46 TestCertExpiration 195.4
47 TestDockerFlags 10.39
48 TestForceSystemdFlag 10.35
49 TestForceSystemdEnv 11.04
94 TestFunctional/parallel/ServiceCmdConnect 34.78
166 TestMultiControlPlane/serial/StopSecondaryNode 162.3
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 150.13
168 TestMultiControlPlane/serial/RestartSecondaryNode 185.34
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 332.59
171 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
173 TestMultiControlPlane/serial/StopCluster 300.23
174 TestMultiControlPlane/serial/RestartCluster 5.25
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
176 TestMultiControlPlane/serial/AddSecondaryNode 0.07
180 TestImageBuild/serial/Setup 10.05
183 TestJSONOutput/start/Command 9.8
189 TestJSONOutput/pause/Command 0.08
195 TestJSONOutput/unpause/Command 0.05
212 TestMinikubeProfile 10.1
215 TestMountStart/serial/StartWithMountFirst 10.03
218 TestMultiNode/serial/FreshStart2Nodes 9.94
219 TestMultiNode/serial/DeployApp2Nodes 114.9
220 TestMultiNode/serial/PingHostFrom2Pods 0.09
221 TestMultiNode/serial/AddNode 0.08
222 TestMultiNode/serial/MultiNodeLabels 0.06
223 TestMultiNode/serial/ProfileList 0.08
224 TestMultiNode/serial/CopyFile 0.06
225 TestMultiNode/serial/StopNode 0.14
226 TestMultiNode/serial/StartAfterStop 51.91
227 TestMultiNode/serial/RestartKeepsNodes 8.99
228 TestMultiNode/serial/DeleteNode 0.1
229 TestMultiNode/serial/StopMultiNode 3.44
230 TestMultiNode/serial/RestartMultiNode 5.26
231 TestMultiNode/serial/ValidateNameConflict 20.1
235 TestPreload 10.24
237 TestScheduledStopUnix 10.17
238 TestSkaffold 13.05
241 TestRunningBinaryUpgrade 606.29
243 TestKubernetesUpgrade 18.26
256 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.35
257 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.03
259 TestStoppedBinaryUpgrade/Upgrade 575.58
261 TestPause/serial/Start 9.93
271 TestNoKubernetes/serial/StartWithK8s 9.95
272 TestNoKubernetes/serial/StartWithStopK8s 5.26
273 TestNoKubernetes/serial/Start 5.31
277 TestNoKubernetes/serial/StartNoArgs 5.31
279 TestNetworkPlugins/group/auto/Start 9.84
280 TestNetworkPlugins/group/flannel/Start 9.79
281 TestNetworkPlugins/group/kindnet/Start 9.8
282 TestNetworkPlugins/group/enable-default-cni/Start 9.92
283 TestNetworkPlugins/group/bridge/Start 9.95
284 TestNetworkPlugins/group/kubenet/Start 9.94
285 TestNetworkPlugins/group/custom-flannel/Start 9.89
286 TestNetworkPlugins/group/calico/Start 9.8
287 TestNetworkPlugins/group/false/Start 9.82
290 TestStartStop/group/old-k8s-version/serial/FirstStart 9.84
291 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
292 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
295 TestStartStop/group/old-k8s-version/serial/SecondStart 5.25
296 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
297 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
298 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
299 TestStartStop/group/old-k8s-version/serial/Pause 0.1
301 TestStartStop/group/no-preload/serial/FirstStart 9.88
303 TestStartStop/group/embed-certs/serial/FirstStart 9.91
304 TestStartStop/group/no-preload/serial/DeployApp 0.1
305 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.14
307 TestStartStop/group/embed-certs/serial/DeployApp 0.09
308 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
311 TestStartStop/group/no-preload/serial/SecondStart 5.27
313 TestStartStop/group/embed-certs/serial/SecondStart 5.32
314 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
315 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
316 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
317 TestStartStop/group/no-preload/serial/Pause 0.1
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.95
320 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
321 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
322 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
323 TestStartStop/group/embed-certs/serial/Pause 0.11
325 TestStartStop/group/newest-cni/serial/FirstStart 10.1
326 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
333 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.25
335 TestStartStop/group/newest-cni/serial/SecondStart 5.25
336 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
337 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
338 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
339 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
343 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (25.41s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-539000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-539000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (25.406857792s)

-- stdout --
	{"specversion":"1.0","id":"d79439b6-4847-46bd-9853-052c0cdc0dab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-539000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b83a4713-8f46-4e09-b523-93d5cb656868","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19681"}}
	{"specversion":"1.0","id":"c4815067-1e72-423b-802f-c0be3a1da269","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig"}}
	{"specversion":"1.0","id":"1e7af936-9225-498b-bb2d-6a678598d3cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"111cc0fa-a009-4339-9edb-3004203615f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8e6bc42d-3267-47e2-8bd5-22736f12a160","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube"}}
	{"specversion":"1.0","id":"d2672303-e061-4c21-982d-6aa21a0bb479","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"18e7720a-f96f-46b6-a842-65146119305f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"7ac1eee6-a710-4fb4-9a37-f36ef6760af2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"6ea02abe-4990-4cbe-be74-47dfbc3b6668","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"70da7405-0518-4c0e-be6d-60fb620e02f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-539000\" primary control-plane node in \"download-only-539000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8ea27ce7-931a-4f5b-b7bd-f9845925c118","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"9c899d5c-b2a0-4af1-99cf-55e8854ebbe6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19681-1412/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108aa56c0 0x108aa56c0 0x108aa56c0 0x108aa56c0 0x108aa56c0 0x108aa56c0 0x108aa56c0] Decompressors:map[bz2:0x14000121dd0 gz:0x14000121dd8 tar:0x14000121d00 tar.bz2:0x14000121d20 tar.gz:0x14000121d30 tar.xz:0x14000121da0 tar.zst:0x14000121db0 tbz2:0x14000121d20 tgz:0x14
000121d30 txz:0x14000121da0 tzst:0x14000121db0 xz:0x14000121e00 zip:0x14000121e10 zst:0x14000121e08] Getters:map[file:0x14000812bc0 http:0x1400017ceb0 https:0x1400017cf00] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"4d50b862-f707-4ad9-872d-02d1b20d0211","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0925 11:28:44.079751    1935 out.go:345] Setting OutFile to fd 1 ...
	I0925 11:28:44.080179    1935 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 11:28:44.080184    1935 out.go:358] Setting ErrFile to fd 2...
	I0925 11:28:44.080187    1935 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 11:28:44.080339    1935 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	W0925 11:28:44.080439    1935 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19681-1412/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19681-1412/.minikube/config/config.json: no such file or directory
	I0925 11:28:44.081580    1935 out.go:352] Setting JSON to true
	I0925 11:28:44.098840    1935 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1695,"bootTime":1727287229,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 11:28:44.098915    1935 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 11:28:44.105105    1935 out.go:97] [download-only-539000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 11:28:44.105289    1935 notify.go:220] Checking for updates...
	W0925 11:28:44.105351    1935 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball: no such file or directory
	I0925 11:28:44.109045    1935 out.go:169] MINIKUBE_LOCATION=19681
	I0925 11:28:44.112115    1935 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 11:28:44.117102    1935 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 11:28:44.120085    1935 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 11:28:44.123065    1935 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	W0925 11:28:44.129019    1935 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0925 11:28:44.129231    1935 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 11:28:44.133071    1935 out.go:97] Using the qemu2 driver based on user configuration
	I0925 11:28:44.133091    1935 start.go:297] selected driver: qemu2
	I0925 11:28:44.133105    1935 start.go:901] validating driver "qemu2" against <nil>
	I0925 11:28:44.133186    1935 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0925 11:28:44.137071    1935 out.go:169] Automatically selected the socket_vmnet network
	I0925 11:28:44.142802    1935 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0925 11:28:44.142890    1935 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0925 11:28:44.142953    1935 cni.go:84] Creating CNI manager for ""
	I0925 11:28:44.142986    1935 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0925 11:28:44.143033    1935 start.go:340] cluster config:
	{Name:download-only-539000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 11:28:44.148358    1935 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:28:44.153129    1935 out.go:97] Downloading VM boot image ...
	I0925 11:28:44.153159    1935 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso
	I0925 11:28:57.903553    1935 out.go:97] Starting "download-only-539000" primary control-plane node in "download-only-539000" cluster
	I0925 11:28:57.903579    1935 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0925 11:28:57.958896    1935 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0925 11:28:57.958906    1935 cache.go:56] Caching tarball of preloaded images
	I0925 11:28:57.959145    1935 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0925 11:28:57.965275    1935 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0925 11:28:57.965282    1935 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0925 11:28:58.069296    1935 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0925 11:29:08.133186    1935 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0925 11:29:08.133362    1935 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0925 11:29:08.828236    1935 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0925 11:29:08.828433    1935 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/download-only-539000/config.json ...
	I0925 11:29:08.828450    1935 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/download-only-539000/config.json: {Name:mk750938212cabbaa9b599ff882d97e51fcdd3d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:29:08.828689    1935 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0925 11:29:08.828884    1935 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0925 11:29:09.413158    1935 out.go:193] 
	W0925 11:29:09.418036    1935 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19681-1412/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108aa56c0 0x108aa56c0 0x108aa56c0 0x108aa56c0 0x108aa56c0 0x108aa56c0 0x108aa56c0] Decompressors:map[bz2:0x14000121dd0 gz:0x14000121dd8 tar:0x14000121d00 tar.bz2:0x14000121d20 tar.gz:0x14000121d30 tar.xz:0x14000121da0 tar.zst:0x14000121db0 tbz2:0x14000121d20 tgz:0x14000121d30 txz:0x14000121da0 tzst:0x14000121db0 xz:0x14000121e00 zip:0x14000121e10 zst:0x14000121e08] Getters:map[file:0x14000812bc0 http:0x1400017ceb0 https:0x1400017cf00] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0925 11:29:09.418067    1935 out_reason.go:110] 
	W0925 11:29:09.425135    1935 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 11:29:09.428962    1935 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-539000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (25.41s)
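
The root cause is the 404 on the kubectl checksum URL: upstream Kubernetes appears to have first published darwin/arm64 client binaries with v1.21, so no v1.20.0 kubectl (or kubectl.sha256) exists for Apple Silicon and the download-only run cannot cache it. A minimal Go sketch, as a hypothetical standalone probe rather than part of the test suite, that confirms the missing artifact:

	// probe.go: HEAD the checksum URL from the error above and print the
	// HTTP status; against dl.k8s.io this is expected to report 404 for
	// v1.20.0 darwin/arm64.
	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		resp.Body.Close()
		fmt.Println(url, "->", resp.Status)
	}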

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19681-1412/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
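
This subtest fails as a direct consequence of the failed download above: it only checks that the previous subtest left a kubectl binary in the cache. A minimal sketch of that existence check (the path comes from the log; the real assertion lives in aaa_download_only_test.go):

	// statcheck.go: stat the expected cache path; with the v1.20.0 download
	// having failed, this prints the same "no such file" error as the test.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		path := "/Users/jenkins/minikube-integration/19681-1412/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
		if _, err := os.Stat(path); err != nil {
			fmt.Println("cached kubectl missing:", err)
			return
		}
		fmt.Println("cached kubectl present")
	}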

TestBinaryMirror (0.27s)

=== RUN   TestBinaryMirror
I0925 11:29:19.994744    1934 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-148000 --alsologtostderr --binary-mirror http://127.0.0.1:49311 --driver=qemu2 
aaa_download_only_test.go:314: (dbg) Non-zero exit: out/minikube-darwin-arm64 start --download-only -p binary-mirror-148000 --alsologtostderr --binary-mirror http://127.0.0.1:49311 --driver=qemu2 : exit status 40 (170.3605ms)

-- stdout --
	* [binary-mirror-148000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "binary-mirror-148000" primary control-plane node in "binary-mirror-148000" cluster
	
	

-- /stdout --
** stderr ** 
	I0925 11:29:20.053400    2001 out.go:345] Setting OutFile to fd 1 ...
	I0925 11:29:20.053525    2001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 11:29:20.053528    2001 out.go:358] Setting ErrFile to fd 2...
	I0925 11:29:20.053531    2001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 11:29:20.053667    2001 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 11:29:20.054803    2001 out.go:352] Setting JSON to false
	I0925 11:29:20.070782    2001 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1731,"bootTime":1727287229,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 11:29:20.070859    2001 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 11:29:20.074108    2001 out.go:177] * [binary-mirror-148000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 11:29:20.081994    2001 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 11:29:20.082066    2001 notify.go:220] Checking for updates...
	I0925 11:29:20.092639    2001 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 11:29:20.096106    2001 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 11:29:20.099150    2001 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 11:29:20.102128    2001 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 11:29:20.103655    2001 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 11:29:20.108072    2001 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 11:29:20.115031    2001 start.go:297] selected driver: qemu2
	I0925 11:29:20.115038    2001 start.go:901] validating driver "qemu2" against <nil>
	I0925 11:29:20.115101    2001 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0925 11:29:20.118124    2001 out.go:177] * Automatically selected the socket_vmnet network
	I0925 11:29:20.123400    2001 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0925 11:29:20.123507    2001 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0925 11:29:20.123532    2001 cni.go:84] Creating CNI manager for ""
	I0925 11:29:20.123560    2001 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 11:29:20.123577    2001 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 11:29:20.123622    2001 start.go:340] cluster config:
	{Name:binary-mirror-148000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:binary-mirror-148000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:http://127.0.0.1:49311 DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 11:29:20.127310    2001 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:29:20.136132    2001 out.go:177] * Starting "binary-mirror-148000" primary control-plane node in "binary-mirror-148000" cluster
	I0925 11:29:20.140180    2001 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 11:29:20.140198    2001 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0925 11:29:20.140209    2001 cache.go:56] Caching tarball of preloaded images
	I0925 11:29:20.140295    2001 preload.go:172] Found /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 11:29:20.140301    2001 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0925 11:29:20.140497    2001 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/binary-mirror-148000/config.json ...
	I0925 11:29:20.140508    2001 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/binary-mirror-148000/config.json: {Name:mk5cc0368bf4b2a6748a02bd30774230e79f7824 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:29:20.140862    2001 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 11:29:20.140916    2001 download.go:107] Downloading: http://127.0.0.1:49311/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49311/v1.31.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/darwin/arm64/v1.31.1/kubectl
	I0925 11:29:20.171195    2001 out.go:201] 
	W0925 11:29:20.175167    2001 out.go:270] X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:49311/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49311/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:49311/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49311/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19681-1412/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104da56c0 0x104da56c0 0x104da56c0 0x104da56c0 0x104da56c0 0x104da56c0 0x104da56c0] Decompressors:map[bz2:0x140003bc6b0 gz:0x140003bc6b8 tar:0x140003bc5f0 tar.bz2:0x140003bc630 tar.gz:0x140003bc640 tar.xz:0x140003bc650 tar.zst:0x140003bc6a0 tbz2:0x140003bc630 tgz:0x140003bc640 txz:0x140003bc650 tzst:0x140003bc6a0 xz:0x140003bc6c0 zip:0x140003bc6e0 zst:0x140003bc6c8] Getters:map[file:0x140005ab250 http:0x140008170e0 https:0x14000817130] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: unexpected EOF
	X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:49311/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49311/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:49311/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49311/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19681-1412/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104da56c0 0x104da56c0 0x104da56c0 0x104da56c0 0x104da56c0 0x104da56c0 0x104da56c0] Decompressors:map[bz2:0x140003bc6b0 gz:0x140003bc6b8 tar:0x140003bc5f0 tar.bz2:0x140003bc630 tar.gz:0x140003bc640 tar.xz:0x140003bc650 tar.zst:0x140003bc6a0 tbz2:0x140003bc630 tgz:0x140003bc640 txz:0x140003bc650 tzst:0x140003bc6a0 xz:0x140003bc6c0 zip:0x140003bc6e0 zst:0x140003bc6c8] Getters:map[file:0x140005ab250 http:0x140008170e0 https:0x14000817130] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: unexpected EOF
	W0925 11:29:20.175173    2001 out.go:270] * 
	* 
	W0925 11:29:20.175612    2001 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 11:29:20.187153    2001 out.go:201] 

** /stderr **
aaa_download_only_test.go:315: start with --binary-mirror failed ["start" "--download-only" "-p" "binary-mirror-148000" "--alsologtostderr" "--binary-mirror" "http://127.0.0.1:49311" "--driver=qemu2" ""] : exit status 40
helpers_test.go:175: Cleaning up "binary-mirror-148000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-148000
--- FAIL: TestBinaryMirror (0.27s)
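
For context, --binary-mirror points minikube at a server that must mimic the dl.k8s.io layout: the log shows it fetching /v1.31.1/bin/darwin/arm64/kubectl with a kubectl.sha256 checksum file beside it. The "unexpected EOF" indicates the test's short-lived server at 127.0.0.1:49311 closed the connection before the body finished. A minimal sketch of a conforming mirror (the ./mirror directory name is an assumption; the port is the one from the log):

	// mirror.go: serve a directory tree laid out like dl.k8s.io, e.g.
	// ./mirror/v1.31.1/bin/darwin/arm64/kubectl plus kubectl.sha256.
	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		http.Handle("/", http.FileServer(http.Dir("./mirror")))
		log.Fatal(http.ListenAndServe("127.0.0.1:49311", nil))
	}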

TestOffline (10.03s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-587000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-587000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.882575792s)

-- stdout --
	* [offline-docker-587000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-587000" primary control-plane node in "offline-docker-587000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-587000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 12:17:30.621498    4607 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:17:30.621637    4607 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:17:30.621640    4607 out.go:358] Setting ErrFile to fd 2...
	I0925 12:17:30.621643    4607 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:17:30.621765    4607 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:17:30.622918    4607 out.go:352] Setting JSON to false
	I0925 12:17:30.640326    4607 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4621,"bootTime":1727287229,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 12:17:30.640408    4607 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 12:17:30.646754    4607 out.go:177] * [offline-docker-587000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 12:17:30.653624    4607 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 12:17:30.653651    4607 notify.go:220] Checking for updates...
	I0925 12:17:30.662519    4607 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:17:30.665573    4607 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 12:17:30.668581    4607 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 12:17:30.671598    4607 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 12:17:30.674545    4607 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 12:17:30.677947    4607 config.go:182] Loaded profile config "multinode-761000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:17:30.678016    4607 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 12:17:30.681525    4607 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 12:17:30.688636    4607 start.go:297] selected driver: qemu2
	I0925 12:17:30.688647    4607 start.go:901] validating driver "qemu2" against <nil>
	I0925 12:17:30.688654    4607 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 12:17:30.690680    4607 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0925 12:17:30.693542    4607 out.go:177] * Automatically selected the socket_vmnet network
	I0925 12:17:30.696640    4607 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 12:17:30.696656    4607 cni.go:84] Creating CNI manager for ""
	I0925 12:17:30.696677    4607 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 12:17:30.696684    4607 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 12:17:30.696723    4607 start.go:340] cluster config:
	{Name:offline-docker-587000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-587000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:17:30.700667    4607 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:17:30.706537    4607 out.go:177] * Starting "offline-docker-587000" primary control-plane node in "offline-docker-587000" cluster
	I0925 12:17:30.710543    4607 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 12:17:30.710572    4607 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0925 12:17:30.710580    4607 cache.go:56] Caching tarball of preloaded images
	I0925 12:17:30.710654    4607 preload.go:172] Found /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 12:17:30.710660    4607 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0925 12:17:30.710727    4607 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/offline-docker-587000/config.json ...
	I0925 12:17:30.710737    4607 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/offline-docker-587000/config.json: {Name:mkf6d5843d48ef0aedc62ae745f0f721f453a905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:17:30.711015    4607 start.go:360] acquireMachinesLock for offline-docker-587000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:17:30.711046    4607 start.go:364] duration metric: took 25.084µs to acquireMachinesLock for "offline-docker-587000"
	I0925 12:17:30.711057    4607 start.go:93] Provisioning new machine with config: &{Name:offline-docker-587000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-587000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:17:30.711085    4607 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:17:30.715531    4607 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0925 12:17:30.731331    4607 start.go:159] libmachine.API.Create for "offline-docker-587000" (driver="qemu2")
	I0925 12:17:30.731361    4607 client.go:168] LocalClient.Create starting
	I0925 12:17:30.731431    4607 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:17:30.731465    4607 main.go:141] libmachine: Decoding PEM data...
	I0925 12:17:30.731475    4607 main.go:141] libmachine: Parsing certificate...
	I0925 12:17:30.731519    4607 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:17:30.731542    4607 main.go:141] libmachine: Decoding PEM data...
	I0925 12:17:30.731549    4607 main.go:141] libmachine: Parsing certificate...
	I0925 12:17:30.731957    4607 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:17:30.892353    4607 main.go:141] libmachine: Creating SSH key...
	I0925 12:17:31.041698    4607 main.go:141] libmachine: Creating Disk image...
	I0925 12:17:31.041718    4607 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:17:31.041918    4607 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/offline-docker-587000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/offline-docker-587000/disk.qcow2
	I0925 12:17:31.051621    4607 main.go:141] libmachine: STDOUT: 
	I0925 12:17:31.051642    4607 main.go:141] libmachine: STDERR: 
	I0925 12:17:31.051720    4607 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/offline-docker-587000/disk.qcow2 +20000M
	I0925 12:17:31.060441    4607 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:17:31.060458    4607 main.go:141] libmachine: STDERR: 
	I0925 12:17:31.060481    4607 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/offline-docker-587000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/offline-docker-587000/disk.qcow2
	I0925 12:17:31.060487    4607 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:17:31.060501    4607 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:17:31.060535    4607 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/offline-docker-587000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/offline-docker-587000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/offline-docker-587000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:bb:a2:c8:a1:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/offline-docker-587000/disk.qcow2
	I0925 12:17:31.062408    4607 main.go:141] libmachine: STDOUT: 
	I0925 12:17:31.062421    4607 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:17:31.062439    4607 client.go:171] duration metric: took 331.076542ms to LocalClient.Create
	I0925 12:17:33.064477    4607 start.go:128] duration metric: took 2.353428333s to createHost
	I0925 12:17:33.064495    4607 start.go:83] releasing machines lock for "offline-docker-587000", held for 2.353488583s
	W0925 12:17:33.064508    4607 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:17:33.070540    4607 out.go:177] * Deleting "offline-docker-587000" in qemu2 ...
	W0925 12:17:33.083971    4607 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:17:33.083983    4607 start.go:729] Will try again in 5 seconds ...
	I0925 12:17:38.085678    4607 start.go:360] acquireMachinesLock for offline-docker-587000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:17:38.086130    4607 start.go:364] duration metric: took 346.833µs to acquireMachinesLock for "offline-docker-587000"
	I0925 12:17:38.086264    4607 start.go:93] Provisioning new machine with config: &{Name:offline-docker-587000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-587000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:17:38.086596    4607 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:17:38.098181    4607 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0925 12:17:38.149280    4607 start.go:159] libmachine.API.Create for "offline-docker-587000" (driver="qemu2")
	I0925 12:17:38.149327    4607 client.go:168] LocalClient.Create starting
	I0925 12:17:38.149447    4607 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:17:38.149518    4607 main.go:141] libmachine: Decoding PEM data...
	I0925 12:17:38.149546    4607 main.go:141] libmachine: Parsing certificate...
	I0925 12:17:38.149619    4607 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:17:38.149668    4607 main.go:141] libmachine: Decoding PEM data...
	I0925 12:17:38.149684    4607 main.go:141] libmachine: Parsing certificate...
	I0925 12:17:38.150906    4607 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:17:38.353673    4607 main.go:141] libmachine: Creating SSH key...
	I0925 12:17:38.405484    4607 main.go:141] libmachine: Creating Disk image...
	I0925 12:17:38.405490    4607 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:17:38.405671    4607 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/offline-docker-587000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/offline-docker-587000/disk.qcow2
	I0925 12:17:38.414604    4607 main.go:141] libmachine: STDOUT: 
	I0925 12:17:38.414623    4607 main.go:141] libmachine: STDERR: 
	I0925 12:17:38.414687    4607 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/offline-docker-587000/disk.qcow2 +20000M
	I0925 12:17:38.422329    4607 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:17:38.422342    4607 main.go:141] libmachine: STDERR: 
	I0925 12:17:38.422353    4607 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/offline-docker-587000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/offline-docker-587000/disk.qcow2
	I0925 12:17:38.422357    4607 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:17:38.422366    4607 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:17:38.422392    4607 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/offline-docker-587000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/offline-docker-587000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/offline-docker-587000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:12:66:d4:90:fb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/offline-docker-587000/disk.qcow2
	I0925 12:17:38.423818    4607 main.go:141] libmachine: STDOUT: 
	I0925 12:17:38.423831    4607 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:17:38.423843    4607 client.go:171] duration metric: took 274.516333ms to LocalClient.Create
	I0925 12:17:40.425982    4607 start.go:128] duration metric: took 2.339400917s to createHost
	I0925 12:17:40.426103    4607 start.go:83] releasing machines lock for "offline-docker-587000", held for 2.339990709s
	W0925 12:17:40.426482    4607 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-587000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:17:40.444285    4607 out.go:201] 
	W0925 12:17:40.448366    4607 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:17:40.448399    4607 out.go:270] * 
	W0925 12:17:40.450030    4607 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 12:17:40.463161    4607 out.go:201] 

** /stderr **
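The qemu-img convert and resize steps above completed cleanly (empty STDERR); the start only fails when socket_vmnet_client tries to hand QEMU a vmnet file descriptor and finds nothing listening on /var/run/socket_vmnet. A minimal check, as a sketch, assuming socket_vmnet was installed via Homebrew as in minikube's qemu2 driver docs (the service name is an assumption; adjust if the helper was built from source):

    # Is the helper socket present, and is its daemon running?
    ls -l /var/run/socket_vmnet
    sudo brew services info socket_vmnet     # assumed Homebrew install
    # Restart the helper before re-running the test
    sudo brew services restart socket_vmnet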
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-587000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-09-25 12:17:40.475238 -0700 PDT m=+2936.564822376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-587000 -n offline-docker-587000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-587000 -n offline-docker-587000: exit status 7 (61.2095ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-587000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-587000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-587000
--- FAIL: TestOffline (10.03s)

TestAddons/parallel/Registry (71.34s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 1.478375ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-j9gg5" [cd05e219-d06b-4852-a6f2-4a3231bd632b] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003613625s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-6zqqk" [0581ec26-3502-4a43-9102-5a41bcd2e80c] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.010431958s
addons_test.go:338: (dbg) Run:  kubectl --context addons-587000 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-587000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-587000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.066093709s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-587000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-darwin-arm64 -p addons-587000 ip
2024/09/25 11:42:29 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-darwin-arm64 -p addons-587000 addons disable registry --alsologtostderr -v=1
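The wget from the busybox pod against the in-cluster DNS name timed out, while the harness's direct GET to 192.168.105.2:5000 above reaches the registry via the node IP. A sketch for re-probing both routes by hand while the profile is still up (the /v2/ path is the standard Docker registry API root, added here for illustration; the test itself does not hit it):

    # In-cluster route, same command the test ran
    kubectl --context addons-587000 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # Node-IP route, same endpoint the harness probed above
    curl -sI "http://$(out/minikube-darwin-arm64 -p addons-587000 ip):5000/v2/"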
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-587000 -n addons-587000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-587000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-539000 | jenkins | v1.34.0 | 25 Sep 24 11:28 PDT |                     |
	|         | -p download-only-539000                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 25 Sep 24 11:29 PDT | 25 Sep 24 11:29 PDT |
	| delete  | -p download-only-539000                                                                     | download-only-539000 | jenkins | v1.34.0 | 25 Sep 24 11:29 PDT | 25 Sep 24 11:29 PDT |
	| start   | -o=json --download-only                                                                     | download-only-953000 | jenkins | v1.34.0 | 25 Sep 24 11:29 PDT |                     |
	|         | -p download-only-953000                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 25 Sep 24 11:29 PDT | 25 Sep 24 11:29 PDT |
	| delete  | -p download-only-953000                                                                     | download-only-953000 | jenkins | v1.34.0 | 25 Sep 24 11:29 PDT | 25 Sep 24 11:29 PDT |
	| delete  | -p download-only-539000                                                                     | download-only-539000 | jenkins | v1.34.0 | 25 Sep 24 11:29 PDT | 25 Sep 24 11:29 PDT |
	| delete  | -p download-only-953000                                                                     | download-only-953000 | jenkins | v1.34.0 | 25 Sep 24 11:29 PDT | 25 Sep 24 11:29 PDT |
	| start   | --download-only -p                                                                          | binary-mirror-148000 | jenkins | v1.34.0 | 25 Sep 24 11:29 PDT |                     |
	|         | binary-mirror-148000                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49311                                                                      |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-148000                                                                     | binary-mirror-148000 | jenkins | v1.34.0 | 25 Sep 24 11:29 PDT | 25 Sep 24 11:29 PDT |
	| addons  | disable dashboard -p                                                                        | addons-587000        | jenkins | v1.34.0 | 25 Sep 24 11:29 PDT |                     |
	|         | addons-587000                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-587000        | jenkins | v1.34.0 | 25 Sep 24 11:29 PDT |                     |
	|         | addons-587000                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-587000 --wait=true                                                                | addons-587000        | jenkins | v1.34.0 | 25 Sep 24 11:29 PDT | 25 Sep 24 11:32 PDT |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | addons-587000 addons disable                                                                | addons-587000        | jenkins | v1.34.0 | 25 Sep 24 11:33 PDT | 25 Sep 24 11:33 PDT |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-587000        | jenkins | v1.34.0 | 25 Sep 24 11:41 PDT | 25 Sep 24 11:41 PDT |
	|         | -p addons-587000                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-587000 addons disable                                                                | addons-587000        | jenkins | v1.34.0 | 25 Sep 24 11:41 PDT | 25 Sep 24 11:41 PDT |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-587000 addons disable                                                                | addons-587000        | jenkins | v1.34.0 | 25 Sep 24 11:41 PDT | 25 Sep 24 11:41 PDT |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-587000        | jenkins | v1.34.0 | 25 Sep 24 11:41 PDT | 25 Sep 24 11:41 PDT |
	|         | -p addons-587000                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-587000 ssh cat                                                                       | addons-587000        | jenkins | v1.34.0 | 25 Sep 24 11:42 PDT | 25 Sep 24 11:42 PDT |
	|         | /opt/local-path-provisioner/pvc-d6f86e1e-adfd-42c5-97b3-7dd574cb793e_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-587000 addons disable                                                                | addons-587000        | jenkins | v1.34.0 | 25 Sep 24 11:42 PDT |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-587000 ip                                                                            | addons-587000        | jenkins | v1.34.0 | 25 Sep 24 11:42 PDT | 25 Sep 24 11:42 PDT |
	| addons  | addons-587000 addons disable                                                                | addons-587000        | jenkins | v1.34.0 | 25 Sep 24 11:42 PDT | 25 Sep 24 11:42 PDT |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/25 11:29:20
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0925 11:29:20.352352    2015 out.go:345] Setting OutFile to fd 1 ...
	I0925 11:29:20.352456    2015 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 11:29:20.352459    2015 out.go:358] Setting ErrFile to fd 2...
	I0925 11:29:20.352461    2015 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 11:29:20.352572    2015 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 11:29:20.353660    2015 out.go:352] Setting JSON to false
	I0925 11:29:20.369588    2015 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1731,"bootTime":1727287229,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 11:29:20.369651    2015 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 11:29:20.374181    2015 out.go:177] * [addons-587000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 11:29:20.381225    2015 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 11:29:20.381272    2015 notify.go:220] Checking for updates...
	I0925 11:29:20.388143    2015 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 11:29:20.391124    2015 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 11:29:20.394125    2015 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 11:29:20.397267    2015 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 11:29:20.400179    2015 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 11:29:20.403311    2015 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 11:29:20.407119    2015 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 11:29:20.413111    2015 start.go:297] selected driver: qemu2
	I0925 11:29:20.413117    2015 start.go:901] validating driver "qemu2" against <nil>
	I0925 11:29:20.413128    2015 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 11:29:20.415180    2015 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0925 11:29:20.418087    2015 out.go:177] * Automatically selected the socket_vmnet network
	I0925 11:29:20.421206    2015 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 11:29:20.421223    2015 cni.go:84] Creating CNI manager for ""
	I0925 11:29:20.421246    2015 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 11:29:20.421255    2015 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 11:29:20.421289    2015 start.go:340] cluster config:
	{Name:addons-587000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-587000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 11:29:20.424743    2015 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:29:20.433114    2015 out.go:177] * Starting "addons-587000" primary control-plane node in "addons-587000" cluster
	I0925 11:29:20.437115    2015 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 11:29:20.437140    2015 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0925 11:29:20.437148    2015 cache.go:56] Caching tarball of preloaded images
	I0925 11:29:20.437222    2015 preload.go:172] Found /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 11:29:20.437227    2015 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0925 11:29:20.437431    2015 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/config.json ...
	I0925 11:29:20.437444    2015 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/config.json: {Name:mkc056094593b9036ada2ebdb55655e39db50302 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:29:20.437800    2015 start.go:360] acquireMachinesLock for addons-587000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 11:29:20.437859    2015 start.go:364] duration metric: took 53.459µs to acquireMachinesLock for "addons-587000"
	I0925 11:29:20.437871    2015 start.go:93] Provisioning new machine with config: &{Name:addons-587000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-587000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 11:29:20.437900    2015 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 11:29:20.446136    2015 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0925 11:29:21.126692    2015 start.go:159] libmachine.API.Create for "addons-587000" (driver="qemu2")
	I0925 11:29:21.126742    2015 client.go:168] LocalClient.Create starting
	I0925 11:29:21.126927    2015 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 11:29:21.226697    2015 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 11:29:21.332960    2015 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 11:29:22.283432    2015 main.go:141] libmachine: Creating SSH key...
	I0925 11:29:22.405161    2015 main.go:141] libmachine: Creating Disk image...
	I0925 11:29:22.405167    2015 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 11:29:22.405393    2015 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/disk.qcow2
	I0925 11:29:22.424578    2015 main.go:141] libmachine: STDOUT: 
	I0925 11:29:22.424601    2015 main.go:141] libmachine: STDERR: 
	I0925 11:29:22.424678    2015 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/disk.qcow2 +20000M
	I0925 11:29:22.432755    2015 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 11:29:22.432770    2015 main.go:141] libmachine: STDERR: 
	I0925 11:29:22.432801    2015 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/disk.qcow2
	I0925 11:29:22.432807    2015 main.go:141] libmachine: Starting QEMU VM...
	I0925 11:29:22.432847    2015 qemu.go:418] Using hvf for hardware acceleration
	I0925 11:29:22.432874    2015 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:e0:88:f5:d8:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/disk.qcow2
	I0925 11:29:22.491630    2015 main.go:141] libmachine: STDOUT: 
	I0925 11:29:22.491661    2015 main.go:141] libmachine: STDERR: 
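For reference, the launch line above decomposes as follows (annotation only; paths elided):

    # socket_vmnet_client SOCKET CMD...         run QEMU with the vmnet socket inherited as fd 3
    # -M virt -cpu host -accel hvf              Apple Silicon guest via Hypervisor.framework
    # -drive ...edk2-aarch64-code.fd,if=pflash  UEFI firmware mapped as flash
    # -boot d -cdrom .../boot2docker.iso        boot the minikube ISO
    # -qmp unix:.../monitor,server,nowait       QMP control socket for the driver
    # -device virtio-net-pci -netdev socket,fd=3  NIC wired to the inherited vmnet fd
    # -daemonize .../disk.qcow2                 background the process, attach the disk image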
	I0925 11:29:22.491665    2015 main.go:141] libmachine: Attempt 0
	I0925 11:29:22.491694    2015 main.go:141] libmachine: Searching for 2:e0:88:f5:d8:63 in /var/db/dhcpd_leases ...
	I0925 11:29:22.491750    2015 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0925 11:29:22.491768    2015 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x66f5a14b}
	I0925 11:29:24.493901    2015 main.go:141] libmachine: Attempt 1
	I0925 11:29:24.493991    2015 main.go:141] libmachine: Searching for 2:e0:88:f5:d8:63 in /var/db/dhcpd_leases ...
	I0925 11:29:24.494364    2015 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0925 11:29:24.494417    2015 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x66f5a14b}
	I0925 11:29:26.496680    2015 main.go:141] libmachine: Attempt 2
	I0925 11:29:26.496881    2015 main.go:141] libmachine: Searching for 2:e0:88:f5:d8:63 in /var/db/dhcpd_leases ...
	I0925 11:29:26.497175    2015 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0925 11:29:26.497227    2015 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x66f5a14b}
	I0925 11:29:28.499381    2015 main.go:141] libmachine: Attempt 3
	I0925 11:29:28.499411    2015 main.go:141] libmachine: Searching for 2:e0:88:f5:d8:63 in /var/db/dhcpd_leases ...
	I0925 11:29:28.499491    2015 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0925 11:29:28.499509    2015 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x66f5a14b}
	I0925 11:29:30.501550    2015 main.go:141] libmachine: Attempt 4
	I0925 11:29:30.501563    2015 main.go:141] libmachine: Searching for 2:e0:88:f5:d8:63 in /var/db/dhcpd_leases ...
	I0925 11:29:30.501595    2015 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0925 11:29:30.501602    2015 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x66f5a14b}
	I0925 11:29:32.503629    2015 main.go:141] libmachine: Attempt 5
	I0925 11:29:32.503642    2015 main.go:141] libmachine: Searching for 2:e0:88:f5:d8:63 in /var/db/dhcpd_leases ...
	I0925 11:29:32.503674    2015 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0925 11:29:32.503696    2015 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x66f5a14b}
	I0925 11:29:34.505737    2015 main.go:141] libmachine: Attempt 6
	I0925 11:29:34.505758    2015 main.go:141] libmachine: Searching for 2:e0:88:f5:d8:63 in /var/db/dhcpd_leases ...
	I0925 11:29:34.505846    2015 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0925 11:29:34.505876    2015 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x66f5a14b}
	I0925 11:29:36.507973    2015 main.go:141] libmachine: Attempt 7
	I0925 11:29:36.508009    2015 main.go:141] libmachine: Searching for 2:e0:88:f5:d8:63 in /var/db/dhcpd_leases ...
	I0925 11:29:36.508074    2015 main.go:141] libmachine: Found 2 entries in /var/db/dhcpd_leases!
	I0925 11:29:36.508087    2015 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:2:e0:88:f5:d8:63 ID:1,2:e0:88:f5:d8:63 Lease:0x66f5a80f}
	I0925 11:29:36.508090    2015 main.go:141] libmachine: Found match: 2:e0:88:f5:d8:63
	I0925 11:29:36.508096    2015 main.go:141] libmachine: IP: 192.168.105.2
	I0925 11:29:36.508101    2015 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
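The attempts above poll macOS's DHCP lease database until the guest's MAC appears; note the search uses the zero-stripped form (2:e0:... rather than the 02:e0:... passed to QEMU), matching how dhcpd records hardware addresses. The same lookup by hand, as a sketch:

    # Watch for the guest's lease (MAC taken from the launch line above)
    grep -B1 -A3 '2:e0:88:f5:d8:63' /var/db/dhcpd_leases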
	I0925 11:29:38.529594    2015 machine.go:93] provisionDockerMachine start ...
	I0925 11:29:38.531222    2015 main.go:141] libmachine: Using SSH client type: native
	I0925 11:29:38.531707    2015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102afdc00] 0x102b00440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 11:29:38.531725    2015 main.go:141] libmachine: About to run SSH command:
	hostname
	I0925 11:29:38.605307    2015 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0925 11:29:38.605336    2015 buildroot.go:166] provisioning hostname "addons-587000"
	I0925 11:29:38.605469    2015 main.go:141] libmachine: Using SSH client type: native
	I0925 11:29:38.605702    2015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102afdc00] 0x102b00440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 11:29:38.605714    2015 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-587000 && echo "addons-587000" | sudo tee /etc/hostname
	I0925 11:29:38.670810    2015 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-587000
	
	I0925 11:29:38.670909    2015 main.go:141] libmachine: Using SSH client type: native
	I0925 11:29:38.671095    2015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102afdc00] 0x102b00440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 11:29:38.671106    2015 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-587000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-587000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-587000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0925 11:29:38.726968    2015 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0925 11:29:38.726982    2015 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19681-1412/.minikube CaCertPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19681-1412/.minikube}
	I0925 11:29:38.726990    2015 buildroot.go:174] setting up certificates
	I0925 11:29:38.726995    2015 provision.go:84] configureAuth start
	I0925 11:29:38.727002    2015 provision.go:143] copyHostCerts
	I0925 11:29:38.727078    2015 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19681-1412/.minikube/ca.pem (1082 bytes)
	I0925 11:29:38.727345    2015 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19681-1412/.minikube/cert.pem (1123 bytes)
	I0925 11:29:38.727480    2015 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19681-1412/.minikube/key.pem (1675 bytes)
	I0925 11:29:38.727580    2015 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca-key.pem org=jenkins.addons-587000 san=[127.0.0.1 192.168.105.2 addons-587000 localhost minikube]
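The server certificate generated here must carry every name the machine will be addressed by (loopback, the vmnet IP, the hostname), hence the san=[...] list. One way to inspect the result, assuming an OpenSSL new enough (1.1.1+) to support the -ext flag:

    openssl x509 -noout -ext subjectAltName \
      -in /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/server.pem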
	I0925 11:29:38.940801    2015 provision.go:177] copyRemoteCerts
	I0925 11:29:38.940884    2015 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0925 11:29:38.940896    2015 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/id_rsa Username:docker}
	I0925 11:29:38.969528    2015 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0925 11:29:38.977892    2015 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0925 11:29:38.985996    2015 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0925 11:29:38.993992    2015 provision.go:87] duration metric: took 266.971417ms to configureAuth
	I0925 11:29:38.994003    2015 buildroot.go:189] setting minikube options for container-runtime
	I0925 11:29:38.994116    2015 config.go:182] Loaded profile config "addons-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 11:29:38.994163    2015 main.go:141] libmachine: Using SSH client type: native
	I0925 11:29:38.994252    2015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102afdc00] 0x102b00440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 11:29:38.994257    2015 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0925 11:29:39.043073    2015 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0925 11:29:39.043082    2015 buildroot.go:70] root file system type: tmpfs
	I0925 11:29:39.043132    2015 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0925 11:29:39.043185    2015 main.go:141] libmachine: Using SSH client type: native
	I0925 11:29:39.043279    2015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102afdc00] 0x102b00440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 11:29:39.043311    2015 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0925 11:29:39.099264    2015 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0925 11:29:39.099337    2015 main.go:141] libmachine: Using SSH client type: native
	I0925 11:29:39.099454    2015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102afdc00] 0x102b00440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 11:29:39.099463    2015 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0925 11:29:40.455908    2015 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
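The diff/mv one-liner above is an update-only-if-changed guard: the freshly rendered unit replaces the installed one, and docker is reloaded and restarted, only when the two differ. On a fresh VM the diff fails outright because no unit exists yet, so the new file is installed and enabled, which is what the "Created symlink" output reflects. The pattern in isolation:

    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
    }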
	
	I0925 11:29:40.455924    2015 machine.go:96] duration metric: took 1.9263245s to provisionDockerMachine
	I0925 11:29:40.455932    2015 client.go:171] duration metric: took 19.329422667s to LocalClient.Create
	I0925 11:29:40.455942    2015 start.go:167] duration metric: took 19.32949675s to libmachine.API.Create "addons-587000"
	I0925 11:29:40.455948    2015 start.go:293] postStartSetup for "addons-587000" (driver="qemu2")
	I0925 11:29:40.455954    2015 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0925 11:29:40.456030    2015 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0925 11:29:40.456041    2015 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/id_rsa Username:docker}
	I0925 11:29:40.485853    2015 ssh_runner.go:195] Run: cat /etc/os-release
	I0925 11:29:40.487278    2015 info.go:137] Remote host: Buildroot 2023.02.9
	I0925 11:29:40.487285    2015 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19681-1412/.minikube/addons for local assets ...
	I0925 11:29:40.487378    2015 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19681-1412/.minikube/files for local assets ...
	I0925 11:29:40.487409    2015 start.go:296] duration metric: took 31.4585ms for postStartSetup
	I0925 11:29:40.487821    2015 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/config.json ...
	I0925 11:29:40.488018    2015 start.go:128] duration metric: took 20.050360667s to createHost
	I0925 11:29:40.488047    2015 main.go:141] libmachine: Using SSH client type: native
	I0925 11:29:40.488133    2015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102afdc00] 0x102b00440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 11:29:40.488138    2015 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0925 11:29:40.538634    2015 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727288980.098431211
	
	I0925 11:29:40.538641    2015 fix.go:216] guest clock: 1727288980.098431211
	I0925 11:29:40.538647    2015 fix.go:229] Guest: 2024-09-25 11:29:40.098431211 -0700 PDT Remote: 2024-09-25 11:29:40.48802 -0700 PDT m=+20.154629418 (delta=-389.588789ms)
	I0925 11:29:40.538659    2015 fix.go:200] guest clock delta is within tolerance: -389.588789ms
	I0925 11:29:40.538662    2015 start.go:83] releasing machines lock for "addons-587000", held for 20.101045375s
	I0925 11:29:40.538950    2015 ssh_runner.go:195] Run: cat /version.json
	I0925 11:29:40.538957    2015 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0925 11:29:40.538959    2015 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/id_rsa Username:docker}
	I0925 11:29:40.538995    2015 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/id_rsa Username:docker}
	I0925 11:29:40.567023    2015 ssh_runner.go:195] Run: systemctl --version
	I0925 11:29:40.611981    2015 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0925 11:29:40.614172    2015 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0925 11:29:40.614209    2015 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0925 11:29:40.620560    2015 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0925 11:29:40.620568    2015 start.go:495] detecting cgroup driver to use...
	I0925 11:29:40.620710    2015 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 11:29:40.627484    2015 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0925 11:29:40.631113    2015 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0925 11:29:40.634427    2015 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0925 11:29:40.634453    2015 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0925 11:29:40.638106    2015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 11:29:40.641833    2015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0925 11:29:40.645498    2015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 11:29:40.649259    2015 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0925 11:29:40.653287    2015 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0925 11:29:40.657223    2015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0925 11:29:40.661506    2015 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
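
Taken together, the sed edits above rewrite /etc/containerd/config.toml to use the cgroupfs driver, the runc v2 shim, the pause:3.10 sandbox image, and /etc/cni/net.d for CNI configs. A sketch of the affected stanzas after the edits (the full file is not shown in the log; key placement follows containerd's CRI plugin layout and is illustrative):

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      sandbox_image = "registry.k8s.io/pause:3.10"
      restrict_oom_score_adj = false
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
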
	I0925 11:29:40.665412    2015 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0925 11:29:40.669300    2015 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0925 11:29:40.669338    2015 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0925 11:29:40.673959    2015 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
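
The sysctl probe fails simply because br_netfilter is not loaded yet (there is no /proc/sys/net/bridge until it is), which minikube logs as "might be okay" and fixes immediately after. The recovery, as standalone commands:

    sudo modprobe br_netfilter                      # creates /proc/sys/net/bridge/*
    sudo sysctl net.bridge.bridge-nf-call-iptables  # now resolves instead of exiting 255
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
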
	I0925 11:29:40.677529    2015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 11:29:40.748243    2015 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0925 11:29:40.759956    2015 start.go:495] detecting cgroup driver to use...
	I0925 11:29:40.760026    2015 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0925 11:29:40.766048    2015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 11:29:40.772047    2015 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0925 11:29:40.780291    2015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 11:29:40.785745    2015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 11:29:40.791597    2015 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0925 11:29:40.829740    2015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 11:29:40.835824    2015 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 11:29:40.842072    2015 ssh_runner.go:195] Run: which cri-dockerd
	I0925 11:29:40.843493    2015 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0925 11:29:40.846532    2015 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0925 11:29:40.852646    2015 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0925 11:29:40.936415    2015 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0925 11:29:41.027163    2015 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0925 11:29:41.027233    2015 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
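
The 130-byte /etc/docker/daemon.json written here is what actually pins Docker to the cgroupfs driver. Its exact contents are not echoed in the log; a plausible shape, based only on the stated purpose, is:

    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }

Docker must be restarted for this to take effect, which is exactly what the daemon-reload/restart pair that follows does.
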
	I0925 11:29:41.033485    2015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 11:29:41.095532    2015 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0925 11:29:43.271994    2015 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.176471125s)
	I0925 11:29:43.272084    2015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0925 11:29:43.277618    2015 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0925 11:29:43.284667    2015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0925 11:29:43.290122    2015 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0925 11:29:43.357147    2015 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0925 11:29:43.428619    2015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 11:29:43.497969    2015 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0925 11:29:43.504992    2015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0925 11:29:43.510603    2015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 11:29:43.575476    2015 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0925 11:29:43.601531    2015 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0925 11:29:43.601624    2015 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0925 11:29:43.604378    2015 start.go:563] Will wait 60s for crictl version
	I0925 11:29:43.604428    2015 ssh_runner.go:195] Run: which crictl
	I0925 11:29:43.605881    2015 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0925 11:29:43.625393    2015 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0925 11:29:43.625470    2015 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0925 11:29:43.635995    2015 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0925 11:29:43.650745    2015 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0925 11:29:43.650840    2015 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0925 11:29:43.652256    2015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
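
The one-liner above is a small idempotent hosts-file edit: strip any existing host.minikube.internal entry, append a fresh one, and copy the result back into place via sudo. After it runs, the guest's /etc/hosts contains:

    192.168.105.1	host.minikube.internal
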
	I0925 11:29:43.656986    2015 kubeadm.go:883] updating cluster {Name:addons-587000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-587000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0925 11:29:43.657036    2015 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 11:29:43.657086    2015 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 11:29:43.661999    2015 docker.go:685] Got preloaded images: 
	I0925 11:29:43.662007    2015 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I0925 11:29:43.662053    2015 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0925 11:29:43.665707    2015 ssh_runner.go:195] Run: which lz4
	I0925 11:29:43.667097    2015 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0925 11:29:43.668434    2015 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0925 11:29:43.668445    2015 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (322160019 bytes)
	I0925 11:29:44.917204    2015 docker.go:649] duration metric: took 1.250172542s to copy over tarball
	I0925 11:29:44.917267    2015 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0925 11:29:45.869668    2015 ssh_runner.go:146] rm: /preloaded.tar.lz4
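
The whole preload path above condenses to: an existence check that fails on first boot, a ~322 MB copy of the cached image tarball, an lz4 extraction into /var (which backs /var/lib/docker), and cleanup. As one sequence (paths from the log; `scp` stands in for minikube's ssh_runner copy):

    stat -c "%s %y" /preloaded.tar.lz4 2>/dev/null ||
      scp preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 \
          docker@192.168.105.2:/preloaded.tar.lz4
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    rm /preloaded.tar.lz4   # docker is then restarted to pick up the unpacked images
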
	I0925 11:29:45.885041    2015 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0925 11:29:45.888962    2015 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0925 11:29:45.895190    2015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 11:29:45.979778    2015 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0925 11:29:48.866806    2015 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.887044209s)
	I0925 11:29:48.866935    2015 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 11:29:48.873295    2015 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0925 11:29:48.873306    2015 cache_images.go:84] Images are preloaded, skipping loading
	I0925 11:29:48.873310    2015 kubeadm.go:934] updating node { 192.168.105.2 8443 v1.31.1 docker true true} ...
	I0925 11:29:48.873364    2015 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-587000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-587000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0925 11:29:48.873434    2015 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0925 11:29:48.892346    2015 cni.go:84] Creating CNI manager for ""
	I0925 11:29:48.892363    2015 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 11:29:48.892369    2015 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0925 11:29:48.892379    2015 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-587000 NodeName:addons-587000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0925 11:29:48.892450    2015 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-587000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0925 11:29:48.892534    2015 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0925 11:29:48.896394    2015 binaries.go:44] Found k8s binaries, skipping transfer
	I0925 11:29:48.896432    2015 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0925 11:29:48.899707    2015 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0925 11:29:48.905502    2015 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0925 11:29:48.911391    2015 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0925 11:29:48.917320    2015 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0925 11:29:48.918616    2015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0925 11:29:48.922504    2015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 11:29:48.991308    2015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0925 11:29:48.999037    2015 certs.go:68] Setting up /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000 for IP: 192.168.105.2
	I0925 11:29:48.999059    2015 certs.go:194] generating shared ca certs ...
	I0925 11:29:48.999070    2015 certs.go:226] acquiring lock for ca certs: {Name:mk58bb807ba332e9ca8b6e9b3a29d33fd7cd9838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:29:48.999282    2015 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19681-1412/.minikube/ca.key
	I0925 11:29:49.046922    2015 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19681-1412/.minikube/ca.crt ...
	I0925 11:29:49.046932    2015 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/ca.crt: {Name:mk6c975363ad1374d2e5f39aff51c2fc36474d8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:29:49.047237    2015 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19681-1412/.minikube/ca.key ...
	I0925 11:29:49.047241    2015 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/ca.key: {Name:mkcaab9391e745ccba5f4c300099c6805f18070c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:29:49.047405    2015 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19681-1412/.minikube/proxy-client-ca.key
	I0925 11:29:49.274606    2015 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19681-1412/.minikube/proxy-client-ca.crt ...
	I0925 11:29:49.274617    2015 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/proxy-client-ca.crt: {Name:mk87a436f70b1ce0c8e2e936292406843f9aa965 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:29:49.274895    2015 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19681-1412/.minikube/proxy-client-ca.key ...
	I0925 11:29:49.274899    2015 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/proxy-client-ca.key: {Name:mkaf105c3eee0eb8fbe704b3af6587011d9d51d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:29:49.275061    2015 certs.go:256] generating profile certs ...
	I0925 11:29:49.275095    2015 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/client.key
	I0925 11:29:49.275103    2015 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/client.crt with IP's: []
	I0925 11:29:49.417367    2015 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/client.crt ...
	I0925 11:29:49.417372    2015 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/client.crt: {Name:mk5b702b906d2b467c2f7de5412b9ce68560823b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:29:49.417534    2015 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/client.key ...
	I0925 11:29:49.417537    2015 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/client.key: {Name:mke7a1d2e157799fd53b0677a3bec9ad0a0dea1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:29:49.417651    2015 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/apiserver.key.72640846
	I0925 11:29:49.417660    2015 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/apiserver.crt.72640846 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.2]
	I0925 11:29:49.547621    2015 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/apiserver.crt.72640846 ...
	I0925 11:29:49.547625    2015 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/apiserver.crt.72640846: {Name:mk439f67121f3c9bca15012da0347ca74df07ba1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:29:49.547769    2015 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/apiserver.key.72640846 ...
	I0925 11:29:49.547773    2015 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/apiserver.key.72640846: {Name:mk4c5d5517abfd335f1785a41b0238328e97ca42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:29:49.547901    2015 certs.go:381] copying /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/apiserver.crt.72640846 -> /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/apiserver.crt
	I0925 11:29:49.548149    2015 certs.go:385] copying /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/apiserver.key.72640846 -> /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/apiserver.key
	I0925 11:29:49.548303    2015 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/proxy-client.key
	I0925 11:29:49.548318    2015 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/proxy-client.crt with IP's: []
	I0925 11:29:49.631615    2015 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/proxy-client.crt ...
	I0925 11:29:49.631619    2015 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/proxy-client.crt: {Name:mk4cb56072aa49023842f20e469666c2d427fdde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:29:49.631769    2015 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/proxy-client.key ...
	I0925 11:29:49.631773    2015 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/proxy-client.key: {Name:mk32310acda8b4392de55d90e7cb4636c33c9746 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:29:49.632035    2015 certs.go:484] found cert: /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca-key.pem (1679 bytes)
	I0925 11:29:49.632066    2015 certs.go:484] found cert: /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem (1082 bytes)
	I0925 11:29:49.632093    2015 certs.go:484] found cert: /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem (1123 bytes)
	I0925 11:29:49.632115    2015 certs.go:484] found cert: /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/key.pem (1675 bytes)
	I0925 11:29:49.632583    2015 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0925 11:29:49.643201    2015 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0925 11:29:49.652261    2015 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0925 11:29:49.662026    2015 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0925 11:29:49.670322    2015 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0925 11:29:49.678583    2015 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0925 11:29:49.686547    2015 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0925 11:29:49.694520    2015 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0925 11:29:49.702816    2015 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0925 11:29:49.711028    2015 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0925 11:29:49.717956    2015 ssh_runner.go:195] Run: openssl version
	I0925 11:29:49.720293    2015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0925 11:29:49.724032    2015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0925 11:29:49.725750    2015 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 25 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I0925 11:29:49.725776    2015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0925 11:29:49.727940    2015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
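
The b5213941.0 link name above is not arbitrary: OpenSSL looks up trusted CAs in /etc/ssl/certs by subject-name hash, so the symlink must be named <hash>.0. The two steps amount to:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem  # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0      # hash-based lookup name
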
	I0925 11:29:49.731636    2015 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0925 11:29:49.733088    2015 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0925 11:29:49.733138    2015 kubeadm.go:392] StartCluster: {Name:addons-587000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-587000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 11:29:49.733211    2015 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0925 11:29:49.738365    2015 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0925 11:29:49.742057    2015 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0925 11:29:49.745667    2015 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0925 11:29:49.749365    2015 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0925 11:29:49.749372    2015 kubeadm.go:157] found existing configuration files:
	
	I0925 11:29:49.749396    2015 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0925 11:29:49.752832    2015 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0925 11:29:49.752861    2015 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0925 11:29:49.756572    2015 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0925 11:29:49.759883    2015 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0925 11:29:49.759910    2015 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0925 11:29:49.763066    2015 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0925 11:29:49.766225    2015 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0925 11:29:49.766255    2015 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0925 11:29:49.769601    2015 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0925 11:29:49.773070    2015 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0925 11:29:49.773099    2015 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0925 11:29:49.776604    2015 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0925 11:29:49.799381    2015 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0925 11:29:49.799409    2015 kubeadm.go:310] [preflight] Running pre-flight checks
	I0925 11:29:49.844687    2015 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0925 11:29:49.844750    2015 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0925 11:29:49.844794    2015 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0925 11:29:49.848882    2015 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0925 11:29:49.869095    2015 out.go:235]   - Generating certificates and keys ...
	I0925 11:29:49.869129    2015 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0925 11:29:49.869166    2015 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0925 11:29:50.005803    2015 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0925 11:29:50.050668    2015 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0925 11:29:50.122431    2015 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0925 11:29:50.419791    2015 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0925 11:29:50.533562    2015 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0925 11:29:50.533624    2015 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-587000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0925 11:29:50.611627    2015 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0925 11:29:50.611697    2015 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-587000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0925 11:29:50.733470    2015 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0925 11:29:50.826440    2015 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0925 11:29:50.985147    2015 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0925 11:29:50.985189    2015 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0925 11:29:51.157600    2015 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0925 11:29:51.260338    2015 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0925 11:29:51.371407    2015 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0925 11:29:51.478262    2015 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0925 11:29:51.669015    2015 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0925 11:29:51.669282    2015 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0925 11:29:51.671567    2015 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0925 11:29:51.679660    2015 out.go:235]   - Booting up control plane ...
	I0925 11:29:51.679706    2015 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0925 11:29:51.679746    2015 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0925 11:29:51.679787    2015 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0925 11:29:51.679847    2015 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0925 11:29:51.682247    2015 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0925 11:29:51.682271    2015 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0925 11:29:51.763855    2015 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0925 11:29:51.763920    2015 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0925 11:29:52.270748    2015 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 506.285ms
	I0925 11:29:52.270984    2015 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0925 11:29:55.271954    2015 kubeadm.go:310] [api-check] The API server is healthy after 3.001768043s
	I0925 11:29:55.280478    2015 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0925 11:29:55.286673    2015 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0925 11:29:55.300528    2015 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0925 11:29:55.300637    2015 kubeadm.go:310] [mark-control-plane] Marking the node addons-587000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0925 11:29:55.304051    2015 kubeadm.go:310] [bootstrap-token] Using token: 39o631.kulg2l3wjj53g5p4
	I0925 11:29:55.315252    2015 out.go:235]   - Configuring RBAC rules ...
	I0925 11:29:55.315310    2015 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0925 11:29:55.315353    2015 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0925 11:29:55.316912    2015 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0925 11:29:55.317866    2015 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0925 11:29:55.319100    2015 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0925 11:29:55.320130    2015 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0925 11:29:55.687617    2015 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0925 11:29:56.085309    2015 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0925 11:29:56.678024    2015 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0925 11:29:56.679334    2015 kubeadm.go:310] 
	I0925 11:29:56.679429    2015 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0925 11:29:56.679438    2015 kubeadm.go:310] 
	I0925 11:29:56.679577    2015 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0925 11:29:56.679595    2015 kubeadm.go:310] 
	I0925 11:29:56.679634    2015 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0925 11:29:56.679716    2015 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0925 11:29:56.679796    2015 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0925 11:29:56.679804    2015 kubeadm.go:310] 
	I0925 11:29:56.679918    2015 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0925 11:29:56.679935    2015 kubeadm.go:310] 
	I0925 11:29:56.680005    2015 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0925 11:29:56.680025    2015 kubeadm.go:310] 
	I0925 11:29:56.680122    2015 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0925 11:29:56.680317    2015 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0925 11:29:56.680436    2015 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0925 11:29:56.680453    2015 kubeadm.go:310] 
	I0925 11:29:56.680605    2015 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0925 11:29:56.680701    2015 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0925 11:29:56.680738    2015 kubeadm.go:310] 
	I0925 11:29:56.680872    2015 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 39o631.kulg2l3wjj53g5p4 \
	I0925 11:29:56.681029    2015 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e51346daa4df67057de8045209492e1d5416aabfe1ee2597d0ef678584899cc1 \
	I0925 11:29:56.681078    2015 kubeadm.go:310] 	--control-plane 
	I0925 11:29:56.681085    2015 kubeadm.go:310] 
	I0925 11:29:56.681263    2015 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0925 11:29:56.681272    2015 kubeadm.go:310] 
	I0925 11:29:56.681365    2015 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 39o631.kulg2l3wjj53g5p4 \
	I0925 11:29:56.681504    2015 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e51346daa4df67057de8045209492e1d5416aabfe1ee2597d0ef678584899cc1 
	I0925 11:29:56.681880    2015 kubeadm.go:310] W0925 18:29:49.358175    1604 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0925 11:29:56.682320    2015 kubeadm.go:310] W0925 18:29:49.358573    1604 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0925 11:29:56.682474    2015 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0925 11:29:56.682490    2015 cni.go:84] Creating CNI manager for ""
	I0925 11:29:56.682508    2015 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 11:29:56.686638    2015 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0925 11:29:56.694799    2015 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0925 11:29:56.703408    2015 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
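
The 496-byte /etc/cni/net.d/1-k8s.conflist written here is minikube's stock bridge CNI configuration. The log does not print its contents; a representative bridge conflist for the 10.244.0.0/16 pod CIDR used by this cluster would look roughly like:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
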
	I0925 11:29:56.715315    2015 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0925 11:29:56.715416    2015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:29:56.715415    2015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-587000 minikube.k8s.io/updated_at=2024_09_25T11_29_56_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=cb9e6220ecbd737c1d09ad9630c6f144f437664a minikube.k8s.io/name=addons-587000 minikube.k8s.io/primary=true
	I0925 11:29:56.787291    2015 ops.go:34] apiserver oom_adj: -16
	I0925 11:29:56.787395    2015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:29:57.289475    2015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:29:57.788398    2015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:29:58.289539    2015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:29:58.789482    2015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:29:59.289512    2015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:29:59.787940    2015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:00.289566    2015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:00.789506    2015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:01.289560    2015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 11:30:01.354858    2015 kubeadm.go:1113] duration metric: took 4.639598625s to wait for elevateKubeSystemPrivileges
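
The burst of identical `kubectl get sa default` runs above is a poll: the minikube-rbac clusterrolebinding step cannot complete until kube-controller-manager has created the default ServiceAccount, so minikube retries roughly every 500 ms. A hedged shell equivalent of that wait:

    # ~4.6s in this run, per the duration metric above
    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
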
	I0925 11:30:01.354875    2015 kubeadm.go:394] duration metric: took 11.621882083s to StartCluster
	I0925 11:30:01.354885    2015 settings.go:142] acquiring lock: {Name:mk3a21ccfd977fa63a309ae265edad20537229ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:30:01.355043    2015 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 11:30:01.355265    2015 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/kubeconfig: {Name:mkc011f0309eba8a9546287478e16310d103c97e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:30:01.355496    2015 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0925 11:30:01.355509    2015 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 11:30:01.355531    2015 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0925 11:30:01.355576    2015 addons.go:69] Setting yakd=true in profile "addons-587000"
	I0925 11:30:01.355584    2015 addons.go:234] Setting addon yakd=true in "addons-587000"
	I0925 11:30:01.355597    2015 host.go:66] Checking if "addons-587000" exists ...
	I0925 11:30:01.355597    2015 addons.go:69] Setting inspektor-gadget=true in profile "addons-587000"
	I0925 11:30:01.355604    2015 addons.go:234] Setting addon inspektor-gadget=true in "addons-587000"
	I0925 11:30:01.355612    2015 config.go:182] Loaded profile config "addons-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 11:30:01.355616    2015 host.go:66] Checking if "addons-587000" exists ...
	I0925 11:30:01.355640    2015 addons.go:69] Setting ingress=true in profile "addons-587000"
	I0925 11:30:01.355646    2015 addons.go:234] Setting addon ingress=true in "addons-587000"
	I0925 11:30:01.355650    2015 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-587000"
	I0925 11:30:01.355658    2015 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-587000"
	I0925 11:30:01.355663    2015 addons.go:69] Setting gcp-auth=true in profile "addons-587000"
	I0925 11:30:01.355669    2015 mustload.go:65] Loading cluster: addons-587000
	I0925 11:30:01.355673    2015 host.go:66] Checking if "addons-587000" exists ...
	I0925 11:30:01.355653    2015 addons.go:69] Setting default-storageclass=true in profile "addons-587000"
	I0925 11:30:01.355674    2015 addons.go:69] Setting ingress-dns=true in profile "addons-587000"
	I0925 11:30:01.355707    2015 addons.go:69] Setting cloud-spanner=true in profile "addons-587000"
	I0925 11:30:01.355712    2015 addons.go:234] Setting addon ingress-dns=true in "addons-587000"
	I0925 11:30:01.355719    2015 addons.go:69] Setting volumesnapshots=true in profile "addons-587000"
	I0925 11:30:01.355726    2015 addons.go:234] Setting addon volumesnapshots=true in "addons-587000"
	I0925 11:30:01.355733    2015 host.go:66] Checking if "addons-587000" exists ...
	I0925 11:30:01.355733    2015 addons.go:234] Setting addon cloud-spanner=true in "addons-587000"
	I0925 11:30:01.355735    2015 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-587000"
	I0925 11:30:01.355745    2015 config.go:182] Loaded profile config "addons-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 11:30:01.355763    2015 host.go:66] Checking if "addons-587000" exists ...
	I0925 11:30:01.355639    2015 addons.go:69] Setting storage-provisioner=true in profile "addons-587000"
	I0925 11:30:01.355772    2015 host.go:66] Checking if "addons-587000" exists ...
	I0925 11:30:01.355777    2015 addons.go:234] Setting addon storage-provisioner=true in "addons-587000"
	I0925 11:30:01.355797    2015 host.go:66] Checking if "addons-587000" exists ...
	I0925 11:30:01.355692    2015 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-587000"
	I0925 11:30:01.355911    2015 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-587000"
	I0925 11:30:01.355659    2015 host.go:66] Checking if "addons-587000" exists ...
	I0925 11:30:01.356007    2015 retry.go:31] will retry after 745.01589ms: connect: dial unix /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/monitor: connect: connection refused
	I0925 11:30:01.356017    2015 retry.go:31] will retry after 512.351625ms: connect: dial unix /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/monitor: connect: connection refused
	I0925 11:30:01.356022    2015 addons.go:69] Setting registry=true in profile "addons-587000"
	I0925 11:30:01.356026    2015 addons.go:234] Setting addon registry=true in "addons-587000"
	I0925 11:30:01.356033    2015 host.go:66] Checking if "addons-587000" exists ...
	I0925 11:30:01.356060    2015 retry.go:31] will retry after 1.08465481s: connect: dial unix /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/monitor: connect: connection refused
	I0925 11:30:01.356094    2015 retry.go:31] will retry after 790.84802ms: connect: dial unix /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/monitor: connect: connection refused
	I0925 11:30:01.355646    2015 addons.go:69] Setting metrics-server=true in profile "addons-587000"
	I0925 11:30:01.356115    2015 addons.go:234] Setting addon metrics-server=true in "addons-587000"
	I0925 11:30:01.356124    2015 host.go:66] Checking if "addons-587000" exists ...
	I0925 11:30:01.356240    2015 retry.go:31] will retry after 1.258647888s: connect: dial unix /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/monitor: connect: connection refused
	I0925 11:30:01.356291    2015 retry.go:31] will retry after 1.202746196s: connect: dial unix /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/monitor: connect: connection refused
	I0925 11:30:01.355765    2015 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-587000"
	I0925 11:30:01.356307    2015 host.go:66] Checking if "addons-587000" exists ...
	I0925 11:30:01.356354    2015 retry.go:31] will retry after 1.212777631s: connect: dial unix /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/monitor: connect: connection refused
	I0925 11:30:01.355697    2015 addons.go:69] Setting volcano=true in profile "addons-587000"
	I0925 11:30:01.356363    2015 addons.go:234] Setting addon volcano=true in "addons-587000"
	I0925 11:30:01.356384    2015 host.go:66] Checking if "addons-587000" exists ...
	I0925 11:30:01.356435    2015 retry.go:31] will retry after 1.454098549s: connect: dial unix /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/monitor: connect: connection refused
	I0925 11:30:01.355695    2015 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-587000"
	I0925 11:30:01.356468    2015 retry.go:31] will retry after 772.96524ms: connect: dial unix /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/monitor: connect: connection refused
	I0925 11:30:01.356522    2015 retry.go:31] will retry after 1.113905589s: connect: dial unix /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/monitor: connect: connection refused
	I0925 11:30:01.356564    2015 retry.go:31] will retry after 736.47141ms: connect: dial unix /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/monitor: connect: connection refused
	I0925 11:30:01.356609    2015 retry.go:31] will retry after 1.282761431s: connect: dial unix /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/monitor: connect: connection refused
	I0925 11:30:01.356632    2015 retry.go:31] will retry after 954.772689ms: connect: dial unix /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/monitor: connect: connection refused
	I0925 11:30:01.356863    2015 host.go:66] Checking if "addons-587000" exists ...
	I0925 11:30:01.359806    2015 out.go:177] * Verifying Kubernetes components...
	I0925 11:30:01.367841    2015 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0925 11:30:01.371806    2015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 11:30:01.376751    2015 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0925 11:30:01.376887    2015 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0925 11:30:01.376898    2015 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/id_rsa Username:docker}
	I0925 11:30:01.422046    2015 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
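
The long pipeline above edits CoreDNS's Corefile in place: fetch the coredns ConfigMap, sed a hosts block in ahead of the `forward . /etc/resolv.conf` directive (plus a `log` line before `errors`), and `kubectl replace` the result. The injected stanza, taken from the sed text, is:

    hosts {
       192.168.105.1 host.minikube.internal
       fallthrough
    }
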
	I0925 11:30:01.478904    2015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0925 11:30:01.551640    2015 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0925 11:30:01.551653    2015 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0925 11:30:01.557041    2015 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0925 11:30:01.557047    2015 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0925 11:30:01.563076    2015 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0925 11:30:01.563088    2015 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0925 11:30:01.575341    2015 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0925 11:30:01.575356    2015 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0925 11:30:01.583313    2015 start.go:971] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0925 11:30:01.583764    2015 node_ready.go:35] waiting up to 6m0s for node "addons-587000" to be "Ready" ...
	I0925 11:30:01.585482    2015 node_ready.go:49] node "addons-587000" has status "Ready":"True"
	I0925 11:30:01.585500    2015 node_ready.go:38] duration metric: took 1.716333ms for node "addons-587000" to be "Ready" ...
	I0925 11:30:01.585504    2015 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0925 11:30:01.589908    2015 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-587000" in "kube-system" namespace to be "Ready" ...
	I0925 11:30:01.594583    2015 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0925 11:30:01.594595    2015 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0925 11:30:01.601645    2015 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0925 11:30:01.601658    2015 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0925 11:30:01.607672    2015 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0925 11:30:01.607677    2015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0925 11:30:01.613389    2015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0925 11:30:01.873348    2015 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0925 11:30:01.877257    2015 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0925 11:30:01.877264    2015 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0925 11:30:01.877273    2015 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/id_rsa Username:docker}
	I0925 11:30:01.910317    2015 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0925 11:30:01.910329    2015 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0925 11:30:01.928607    2015 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0925 11:30:01.928621    2015 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0925 11:30:01.934569    2015 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0925 11:30:01.934579    2015 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0925 11:30:01.940425    2015 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0925 11:30:01.940432    2015 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0925 11:30:01.946211    2015 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0925 11:30:01.946217    2015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0925 11:30:01.951799    2015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0925 11:30:02.088644    2015 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-587000" context rescaled to 1 replicas
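Rescaling the coredns deployment to a single replica, as kapi.go logs above, is equivalent in effect to running:

	kubectl -n kube-system scale deployment coredns --replicas=1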
	I0925 11:30:02.094654    2015 addons.go:234] Setting addon default-storageclass=true in "addons-587000"
	I0925 11:30:02.094675    2015 host.go:66] Checking if "addons-587000" exists ...
	I0925 11:30:02.095352    2015 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0925 11:30:02.095361    2015 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0925 11:30:02.095367    2015 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/id_rsa Username:docker}
	I0925 11:30:02.106587    2015 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0925 11:30:02.110572    2015 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0925 11:30:02.110580    2015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0925 11:30:02.110590    2015 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/id_rsa Username:docker}
	I0925 11:30:02.138697    2015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0925 11:30:02.162467    2015 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0925 11:30:02.166535    2015 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0925 11:30:02.170577    2015 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0925 11:30:02.170587    2015 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0925 11:30:02.170601    2015 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/id_rsa Username:docker}
	I0925 11:30:02.173581    2015 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0925 11:30:02.173589    2015 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0925 11:30:02.173597    2015 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/id_rsa Username:docker}
	I0925 11:30:02.178744    2015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0925 11:30:02.295621    2015 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0925 11:30:02.295635    2015 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0925 11:30:02.317845    2015 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0925 11:30:02.324790    2015 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0925 11:30:02.331051    2015 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0925 11:30:02.331063    2015 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0925 11:30:02.331091    2015 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0925 11:30:02.331096    2015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0925 11:30:02.331743    2015 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0925 11:30:02.338794    2015 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0925 11:30:02.346765    2015 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0925 11:30:02.350807    2015 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0925 11:30:02.354810    2015 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0925 11:30:02.358787    2015 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0925 11:30:02.362264    2015 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0925 11:30:02.362273    2015 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0925 11:30:02.362775    2015 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0925 11:30:02.362781    2015 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0925 11:30:02.362790    2015 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/id_rsa Username:docker}
	I0925 11:30:02.408601    2015 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0925 11:30:02.408616    2015 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0925 11:30:02.443763    2015 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-587000"
	I0925 11:30:02.443793    2015 host.go:66] Checking if "addons-587000" exists ...
	I0925 11:30:02.446441    2015 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0925 11:30:02.453809    2015 out.go:177]   - Using image docker.io/busybox:stable
	I0925 11:30:02.456901    2015 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0925 11:30:02.456913    2015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0925 11:30:02.456923    2015 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/id_rsa Username:docker}
	I0925 11:30:02.475870    2015 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 11:30:02.479726    2015 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 11:30:02.479739    2015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0925 11:30:02.479752    2015 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/id_rsa Username:docker}
	I0925 11:30:02.491811    2015 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0925 11:30:02.491822    2015 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0925 11:30:02.495816    2015 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0925 11:30:02.495826    2015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0925 11:30:02.512379    2015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0925 11:30:02.516073    2015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0925 11:30:02.557313    2015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0925 11:30:02.563730    2015 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0925 11:30:02.566837    2015 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0925 11:30:02.566848    2015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0925 11:30:02.566859    2015 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/id_rsa Username:docker}
	I0925 11:30:02.573801    2015 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0925 11:30:02.577841    2015 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0925 11:30:02.577852    2015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0925 11:30:02.577863    2015 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/id_rsa Username:docker}
	I0925 11:30:02.579276    2015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 11:30:02.605842    2015 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0925 11:30:02.605856    2015 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0925 11:30:02.619726    2015 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0925 11:30:02.629831    2015 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0925 11:30:02.636636    2015 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0925 11:30:02.640898    2015 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0925 11:30:02.640913    2015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0925 11:30:02.640924    2015 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/id_rsa Username:docker}
	I0925 11:30:02.645774    2015 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I0925 11:30:02.649793    2015 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I0925 11:30:02.653812    2015 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I0925 11:30:02.657282    2015 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0925 11:30:02.657290    2015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
	I0925 11:30:02.657300    2015 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/id_rsa Username:docker}
	I0925 11:30:02.675051    2015 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0925 11:30:02.675065    2015 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0925 11:30:02.745261    2015 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0925 11:30:02.745278    2015 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0925 11:30:02.764579    2015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0925 11:30:02.770546    2015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0925 11:30:02.774181    2015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0925 11:30:02.790344    2015 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0925 11:30:02.790359    2015 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0925 11:30:02.814625    2015 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0925 11:30:02.818787    2015 out.go:177]   - Using image docker.io/registry:2.8.3
	I0925 11:30:02.822836    2015 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0925 11:30:02.822844    2015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0925 11:30:02.822855    2015 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/id_rsa Username:docker}
	I0925 11:30:02.831273    2015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0925 11:30:02.894175    2015 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0925 11:30:02.894189    2015 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0925 11:30:02.975923    2015 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0925 11:30:02.975932    2015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0925 11:30:03.089185    2015 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0925 11:30:03.089198    2015 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0925 11:30:03.161662    2015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.022961625s)
	I0925 11:30:03.161682    2015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.209885333s)
	W0925 11:30:03.161693    2015 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0925 11:30:03.161707    2015 retry.go:31] will retry after 370.092056ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	[stdout/stderr identical to the apply failure quoted above]
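The failure above is the usual CRD ordering race: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass, but it is submitted in the same kubectl apply as the snapshot.storage.k8s.io CRDs, before the API server has registered the new kind. minikube's answer is simply to retry (the attempt at 11:30:03.533925 below also adds --force). Done by hand, the standard fix is to apply the CRDs first and wait for them to be established before creating any custom resources; a minimal sketch, assuming the same manifest files:

	kubectl apply \
	  -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f csi-hostpath-snapshotclass.yaml   # the VolumeSnapshotClass kind now resolves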
	I0925 11:30:03.180672    2015 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0925 11:30:03.180688    2015 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0925 11:30:03.222000    2015 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0925 11:30:03.222009    2015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0925 11:30:03.322713    2015 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0925 11:30:03.322723    2015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0925 11:30:03.344535    2015 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0925 11:30:03.344547    2015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0925 11:30:03.378305    2015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0925 11:30:03.383286    2015 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0925 11:30:03.383299    2015 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0925 11:30:03.439854    2015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0925 11:30:03.533925    2015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0925 11:30:03.605330    2015 pod_ready.go:103] pod "etcd-addons-587000" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:03.748270    2015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.232194625s)
	I0925 11:30:03.748304    2015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.235926417s)
	I0925 11:30:03.748313    2015 addons.go:475] Verifying addon metrics-server=true in "addons-587000"
	I0925 11:30:03.751445    2015 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-587000 service yakd-dashboard -n yakd-dashboard
	
	I0925 11:30:03.922124    2015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.364807458s)
	I0925 11:30:03.922163    2015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.157589542s)
	I0925 11:30:03.922177    2015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.151637583s)
	I0925 11:30:03.922132    2015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.342862083s)
	I0925 11:30:05.626204    2015 pod_ready.go:103] pod "etcd-addons-587000" in "kube-system" namespace has status "Ready":"False"
	I0925 11:30:05.641537    2015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (2.867376959s)
	I0925 11:30:05.641556    2015 addons.go:475] Verifying addon ingress=true in "addons-587000"
	I0925 11:30:05.648388    2015 out.go:177] * Verifying ingress addon...
	I0925 11:30:05.652673    2015 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0925 11:30:05.654817    2015 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0925 11:30:05.654823    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:06.156976    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:06.658586    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:06.846369    2015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (4.015127167s)
	I0925 11:30:06.846375    2015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (3.468099667s)
	I0925 11:30:06.846638    2015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.406811042s)
	I0925 11:30:06.846662    2015 addons.go:475] Verifying addon registry=true in "addons-587000"
	I0925 11:30:06.846676    2015 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-587000"
	I0925 11:30:06.846770    2015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.312848875s)
	I0925 11:30:06.852514    2015 out.go:177] * Verifying registry addon...
	I0925 11:30:06.852542    2015 out.go:177] * Verifying csi-hostpath-driver addon...
	I0925 11:30:06.863036    2015 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0925 11:30:06.870977    2015 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0925 11:30:06.888817    2015 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0925 11:30:06.888826    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:06.889194    2015 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0925 11:30:06.889202    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:07.157054    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:07.367401    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:07.374327    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:07.593883    2015 pod_ready.go:93] pod "etcd-addons-587000" in "kube-system" namespace has status "Ready":"True"
	I0925 11:30:07.593892    2015 pod_ready.go:82] duration metric: took 6.004046666s for pod "etcd-addons-587000" in "kube-system" namespace to be "Ready" ...
	I0925 11:30:07.593896    2015 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-587000" in "kube-system" namespace to be "Ready" ...
	I0925 11:30:07.596039    2015 pod_ready.go:93] pod "kube-apiserver-addons-587000" in "kube-system" namespace has status "Ready":"True"
	I0925 11:30:07.596044    2015 pod_ready.go:82] duration metric: took 2.144542ms for pod "kube-apiserver-addons-587000" in "kube-system" namespace to be "Ready" ...
	I0925 11:30:07.596047    2015 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-587000" in "kube-system" namespace to be "Ready" ...
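The pod_ready.go polling amounts to waiting for the Ready condition on each control-plane pod. The same check, reproduced with kubectl against the pod names from this run, would look like:

	kubectl -n kube-system wait --for=condition=Ready --timeout=6m \
	  pod/etcd-addons-587000 \
	  pod/kube-apiserver-addons-587000 \
	  pod/kube-controller-manager-addons-587000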
	I0925 11:30:07.657227    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:07.865013    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:07.874356    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:08.156738    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:08.363821    2015 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0925 11:30:08.363837    2015 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/id_rsa Username:docker}
	I0925 11:30:08.364867    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:08.373148    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:08.396770    2015 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0925 11:30:08.403760    2015 addons.go:234] Setting addon gcp-auth=true in "addons-587000"
	I0925 11:30:08.403781    2015 host.go:66] Checking if "addons-587000" exists ...
	I0925 11:30:08.404508    2015 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0925 11:30:08.404516    2015 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/addons-587000/id_rsa Username:docker}
	I0925 11:30:08.435944    2015 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0925 11:30:08.439951    2015 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0925 11:30:08.443757    2015 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0925 11:30:08.443764    2015 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0925 11:30:08.450833    2015 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0925 11:30:08.450839    2015 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0925 11:30:08.457199    2015 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0925 11:30:08.457205    2015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0925 11:30:08.465004    2015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0925 11:30:08.600850    2015 pod_ready.go:93] pod "kube-controller-manager-addons-587000" in "kube-system" namespace has status "Ready":"True"
	I0925 11:30:08.600859    2015 pod_ready.go:82] duration metric: took 1.004820916s for pod "kube-controller-manager-addons-587000" in "kube-system" namespace to be "Ready" ...
	I0925 11:30:08.600864    2015 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-587000" in "kube-system" namespace to be "Ready" ...
	I0925 11:30:08.602878    2015 pod_ready.go:93] pod "kube-scheduler-addons-587000" in "kube-system" namespace has status "Ready":"True"
	I0925 11:30:08.602884    2015 pod_ready.go:82] duration metric: took 2.016667ms for pod "kube-scheduler-addons-587000" in "kube-system" namespace to be "Ready" ...
	I0925 11:30:08.602887    2015 pod_ready.go:39] duration metric: took 7.017464542s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 11:30:08.602897    2015 api_server.go:52] waiting for apiserver process to appear ...
	I0925 11:30:08.602956    2015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 11:30:08.656774    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:08.674919    2015 api_server.go:72] duration metric: took 7.319484792s to wait for apiserver process to appear ...
	I0925 11:30:08.674935    2015 api_server.go:88] waiting for apiserver healthz status ...
	I0925 11:30:08.674945    2015 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0925 11:30:08.675759    2015 addons.go:475] Verifying addon gcp-auth=true in "addons-587000"
	I0925 11:30:08.678057    2015 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
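The healthz probe is an unauthenticated GET against the API server; /healthz is readable without credentials on a default cluster (the system:public-info-viewer binding covers it), so the same check by hand is roughly:

	curl -k https://192.168.105.2:8443/healthz
	# ok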
	I0925 11:30:08.678561    2015 api_server.go:141] control plane version: v1.31.1
	I0925 11:30:08.678567    2015 api_server.go:131] duration metric: took 3.628958ms to wait for apiserver health ...
	I0925 11:30:08.678571    2015 system_pods.go:43] waiting for kube-system pods to appear ...
	I0925 11:30:08.680451    2015 out.go:177] * Verifying gcp-auth addon...
	I0925 11:30:08.686702    2015 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0925 11:30:08.756614    2015 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0925 11:30:08.761420    2015 system_pods.go:59] 17 kube-system pods found
	I0925 11:30:08.761435    2015 system_pods.go:61] "coredns-7c65d6cfc9-8l5nn" [59fcfdff-682e-4942-8611-3713dff2b08f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:30:08.761440    2015 system_pods.go:61] "csi-hostpath-attacher-0" [c6f8d9f8-6c2c-48fc-8a39-2165b779fda4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0925 11:30:08.761443    2015 system_pods.go:61] "csi-hostpath-resizer-0" [088fb5d5-0c45-4298-9b25-a4ef771d6a24] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0925 11:30:08.761446    2015 system_pods.go:61] "csi-hostpathplugin-bt2vs" [794eda7d-119d-4c01-99c7-0c8073dca48e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0925 11:30:08.761449    2015 system_pods.go:61] "etcd-addons-587000" [73a42966-3b42-48da-b707-98c1fb483644] Running
	I0925 11:30:08.761451    2015 system_pods.go:61] "kube-apiserver-addons-587000" [e96921b2-9a3f-4462-937c-a7e52ca2c3a1] Running
	I0925 11:30:08.761453    2015 system_pods.go:61] "kube-controller-manager-addons-587000" [dfb8726e-82f8-4c22-9de2-8fc7fd551793] Running
	I0925 11:30:08.761456    2015 system_pods.go:61] "kube-ingress-dns-minikube" [f2c204eb-7e76-4d1d-9b6a-cdd624ebec84] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0925 11:30:08.761459    2015 system_pods.go:61] "kube-proxy-xc7t5" [73838f7e-a5e2-418b-bcfe-f3dcd5a5dc02] Running
	I0925 11:30:08.761460    2015 system_pods.go:61] "kube-scheduler-addons-587000" [3c81e6ec-4405-45b1-bc3c-c1e50658916d] Running
	I0925 11:30:08.761463    2015 system_pods.go:61] "metrics-server-84c5f94fbc-wbcvk" [bcee8b88-faae-4d9c-97d7-faf4a0f60c4d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:30:08.761466    2015 system_pods.go:61] "nvidia-device-plugin-daemonset-t8p54" [c38de65c-7c82-40c9-822e-d674e05e12fd] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0925 11:30:08.761468    2015 system_pods.go:61] "registry-66c9cd494c-j9gg5" [cd05e219-d06b-4852-a6f2-4a3231bd632b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0925 11:30:08.761470    2015 system_pods.go:61] "registry-proxy-6zqqk" [0581ec26-3502-4a43-9102-5a41bcd2e80c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0925 11:30:08.761473    2015 system_pods.go:61] "snapshot-controller-56fcc65765-c2dls" [2ce87c61-fcae-436d-a09f-468f766c3850] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0925 11:30:08.761475    2015 system_pods.go:61] "snapshot-controller-56fcc65765-t7fvh" [2903c55d-abf2-439c-b62f-3cc403002806] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0925 11:30:08.761477    2015 system_pods.go:61] "storage-provisioner" [3d39b2e9-aedc-4465-8298-550aa16ef15f] Running
	I0925 11:30:08.761480    2015 system_pods.go:74] duration metric: took 82.906667ms to wait for pod list to return data ...
	I0925 11:30:08.761484    2015 default_sa.go:34] waiting for default service account to be created ...
	I0925 11:30:08.762628    2015 default_sa.go:45] found service account: "default"
	I0925 11:30:08.762634    2015 default_sa.go:55] duration metric: took 1.147584ms for default service account to be created ...
	I0925 11:30:08.762637    2015 system_pods.go:116] waiting for k8s-apps to be running ...
	I0925 11:30:08.767042    2015 system_pods.go:86] 17 kube-system pods found
	I0925 11:30:08.767052    2015 system_pods.go:89] "coredns-7c65d6cfc9-8l5nn" [59fcfdff-682e-4942-8611-3713dff2b08f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 11:30:08.767057    2015 system_pods.go:89] "csi-hostpath-attacher-0" [c6f8d9f8-6c2c-48fc-8a39-2165b779fda4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0925 11:30:08.767060    2015 system_pods.go:89] "csi-hostpath-resizer-0" [088fb5d5-0c45-4298-9b25-a4ef771d6a24] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0925 11:30:08.767063    2015 system_pods.go:89] "csi-hostpathplugin-bt2vs" [794eda7d-119d-4c01-99c7-0c8073dca48e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0925 11:30:08.767065    2015 system_pods.go:89] "etcd-addons-587000" [73a42966-3b42-48da-b707-98c1fb483644] Running
	I0925 11:30:08.767067    2015 system_pods.go:89] "kube-apiserver-addons-587000" [e96921b2-9a3f-4462-937c-a7e52ca2c3a1] Running
	I0925 11:30:08.767069    2015 system_pods.go:89] "kube-controller-manager-addons-587000" [dfb8726e-82f8-4c22-9de2-8fc7fd551793] Running
	I0925 11:30:08.767071    2015 system_pods.go:89] "kube-ingress-dns-minikube" [f2c204eb-7e76-4d1d-9b6a-cdd624ebec84] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0925 11:30:08.767072    2015 system_pods.go:89] "kube-proxy-xc7t5" [73838f7e-a5e2-418b-bcfe-f3dcd5a5dc02] Running
	I0925 11:30:08.767074    2015 system_pods.go:89] "kube-scheduler-addons-587000" [3c81e6ec-4405-45b1-bc3c-c1e50658916d] Running
	I0925 11:30:08.767077    2015 system_pods.go:89] "metrics-server-84c5f94fbc-wbcvk" [bcee8b88-faae-4d9c-97d7-faf4a0f60c4d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 11:30:08.767080    2015 system_pods.go:89] "nvidia-device-plugin-daemonset-t8p54" [c38de65c-7c82-40c9-822e-d674e05e12fd] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0925 11:30:08.767083    2015 system_pods.go:89] "registry-66c9cd494c-j9gg5" [cd05e219-d06b-4852-a6f2-4a3231bd632b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0925 11:30:08.767085    2015 system_pods.go:89] "registry-proxy-6zqqk" [0581ec26-3502-4a43-9102-5a41bcd2e80c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0925 11:30:08.767099    2015 system_pods.go:89] "snapshot-controller-56fcc65765-c2dls" [2ce87c61-fcae-436d-a09f-468f766c3850] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0925 11:30:08.767103    2015 system_pods.go:89] "snapshot-controller-56fcc65765-t7fvh" [2903c55d-abf2-439c-b62f-3cc403002806] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0925 11:30:08.767112    2015 system_pods.go:89] "storage-provisioner" [3d39b2e9-aedc-4465-8298-550aa16ef15f] Running
	I0925 11:30:08.767117    2015 system_pods.go:126] duration metric: took 4.477584ms to wait for k8s-apps to be running ...
	I0925 11:30:08.767121    2015 system_svc.go:44] waiting for kubelet service to be running ....
	I0925 11:30:08.767185    2015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 11:30:08.772605    2015 system_svc.go:56] duration metric: took 5.481708ms WaitForService to wait for kubelet
	I0925 11:30:08.772616    2015 kubeadm.go:582] duration metric: took 7.417185375s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 11:30:08.772627    2015 node_conditions.go:102] verifying NodePressure condition ...
	I0925 11:30:08.794802    2015 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0925 11:30:08.794813    2015 node_conditions.go:123] node cpu capacity is 2
	I0925 11:30:08.794820    2015 node_conditions.go:105] duration metric: took 22.1905ms to run NodePressure ...
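The NodePressure verification reads capacity straight from the node status; the figures logged above can be confirmed with a one-liner (a sketch against the same profile):

	kubectl get node addons-587000 \
	  -o jsonpath='{.status.capacity.cpu} {.status.capacity.ephemeral-storage}'
	# 2 17734596Ki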
	I0925 11:30:08.794825    2015 start.go:241] waiting for startup goroutines ...
	I0925 11:30:08.866817    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:08.873044    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:09.157251    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[kapi.go:96 polling continues at ~500ms intervals: the registry, csi-hostpath-driver, and ingress-nginx pods are re-checked in turn and all remain Pending from 11:30:09.366924 through 11:30:21.873545]
	I0925 11:30:22.156860    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:22.365231    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:22.374207    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:22.656665    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:22.866732    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:22.874746    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:23.156724    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:23.366634    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:23.372935    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:23.656621    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:23.866870    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:23.873659    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:24.156627    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:24.366612    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:24.373996    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:24.656332    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:24.865832    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:24.874359    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:25.156691    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:25.366640    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:25.373134    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:25.658064    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:25.866644    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:25.873883    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:26.156998    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:26.365597    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:26.373769    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:26.656752    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:26.864841    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:26.872921    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:27.156571    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:27.367395    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:27.373789    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:27.656848    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:27.866845    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:27.873719    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:28.160156    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:28.368201    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:28.379051    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:28.657309    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:28.873169    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:28.879217    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:29.158675    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:29.373448    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:29.377835    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:29.656607    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:29.866804    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:29.873596    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:30.156351    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:30.365100    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:30.466936    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:30.656601    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:30.866992    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:30.874099    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:31.156838    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:31.365036    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:31.373884    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:31.656430    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:31.866811    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:31.873974    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:32.156527    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:32.366767    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:32.373554    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:32.656340    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:32.866307    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:32.873639    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:33.156511    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:33.367596    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:33.375490    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:33.656475    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:33.866459    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:33.874118    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:34.156977    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:34.364811    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:34.373997    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:34.656889    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:34.868511    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:34.877010    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:35.165038    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:35.367391    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:35.372976    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:35.656613    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:35.884749    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:35.885486    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:36.156790    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:36.365928    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:36.374268    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:36.657006    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:36.868318    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:36.874220    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:37.157408    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:37.369074    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:37.373087    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:37.655085    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:37.864537    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:37.873164    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:38.156725    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:38.366661    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:38.373379    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:38.656431    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:38.866586    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:38.873557    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:39.156417    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:39.429474    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:39.429725    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:39.656843    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:39.866680    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:39.873426    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:40.178380    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:40.365912    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 11:30:40.373652    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:40.657369    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 11:30:40.865043    2015 kapi.go:107] duration metric: took 34.00242275s to wait for kubernetes.io/minikube-addons=registry ...
	I0925 11:30:40.872998    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 11:30:41.156569    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... 110 near-identical kapi.go:96 poll lines omitted: ingress-nginx and csi-hostpath-driver remain Pending, polled every ~500ms each from 11:30:41 through 11:31:08 ...]
	I0925 11:31:08.874867    2015 kapi.go:107] duration metric: took 1m2.004653167s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0925 11:31:09.157900    2015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... 17 near-identical kapi.go:96 poll lines omitted: ingress-nginx remains Pending, polled every ~500ms from 11:31:09 through 11:31:17 ...]
	I0925 11:31:18.156190    2015 kapi.go:107] duration metric: took 1m12.504409166s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0925 11:31:30.690865    2015 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0925 11:31:30.690877    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 11:31:31.191295    2015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... 134 near-identical kapi.go:96 poll lines omitted: gcp-auth remains Pending, polled every ~500ms from 11:31:31 through 11:32:38 ...]
	I0925 11:32:38.649186    2015 kapi.go:107] duration metric: took 2m30.0040965s to wait for kubernetes.io/minikube-addons=gcp-auth ...
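
[Editor's note] The kapi.go:96 lines condensed above all come from the same pattern: minikube polls the API server for pods matching a label selector until they leave Pending, then logs a "duration metric" line. The following is a minimal client-go sketch of that pattern, assuming a configured clientset; the helper name waitForPods and the fixed 500ms interval are inferred from the log cadence, not minikube's actual implementation.

	package addonwait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPods polls pods matching selector every 500ms until all matching
	// pods report Running, then prints the elapsed time, mirroring the
	// "duration metric" lines in the log above. (Hypothetical helper.)
	func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
		start := time.Now()
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			ready := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					ready = false
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				}
			}
			if ready {
				fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(500 * time.Millisecond):
			}
		}
	}
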
	I0925 11:32:38.655752    2015 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-587000 cluster.
	I0925 11:32:38.660685    2015 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0925 11:32:38.665668    2015 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0925 11:32:38.670742    2015 out.go:177] * Enabled addons: inspektor-gadget, nvidia-device-plugin, default-storageclass, metrics-server, yakd, ingress-dns, cloud-spanner, storage-provisioner, storage-provisioner-rancher, volcano, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0925 11:32:38.673678    2015 addons.go:510] duration metric: took 2m37.35985825s for enable addons: enabled=[inspektor-gadget nvidia-device-plugin default-storageclass metrics-server yakd ingress-dns cloud-spanner storage-provisioner storage-provisioner-rancher volcano volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0925 11:32:38.673701    2015 start.go:246] waiting for cluster config update ...
	I0925 11:32:38.673715    2015 start.go:255] writing updated cluster config ...
	I0925 11:32:38.674203    2015 ssh_runner.go:195] Run: rm -f paused
	I0925 11:32:38.829708    2015 start.go:600] kubectl: 1.30.2, cluster: 1.31.1 (minor skew: 1)
	I0925 11:32:38.832666    2015 out.go:177] * Done! kubectl is now configured to use "addons-587000" cluster and "default" namespace by default
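
[Editor's note] The `gcp-auth-skip-secret` hint printed a few lines above can be illustrated by creating a pod that opts out of credential mounting. This is a hedged sketch only: the pod name, image, and the label value "true" are assumptions (the addon's message says only that the label key must be present), not anything taken from this report.

	package main

	import (
		"context"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the same kubeconfig that kubectl was configured with above.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-gcp-creds",
				// Label key per the gcp-auth message above; the value is arbitrary.
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:    "app",
					Image:   "busybox",
					Command: []string{"sleep", "3600"},
				}},
			},
		}
		if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
			log.Fatal(err)
		}
		log.Println("created pod without gcp-auth credentials mounted")
	}
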
	
	
	==> Docker <==
	Sep 25 18:42:04 addons-587000 dockerd[1294]: time="2024-09-25T18:42:04.406281766Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 18:42:04 addons-587000 dockerd[1294]: time="2024-09-25T18:42:04.411775363Z" level=warning msg="cleanup warnings time=\"2024-09-25T18:42:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 25 18:42:12 addons-587000 dockerd[1288]: time="2024-09-25T18:42:12.168991044Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=231ca8c4441f4c48 traceID=72b5f3dc596a051f59b1627b027f5132
	Sep 25 18:42:12 addons-587000 dockerd[1288]: time="2024-09-25T18:42:12.170590232Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=231ca8c4441f4c48 traceID=72b5f3dc596a051f59b1627b027f5132
	Sep 25 18:42:29 addons-587000 dockerd[1288]: time="2024-09-25T18:42:29.307427969Z" level=info msg="ignoring event" container=04dc49bc186fc2b2b2744c5d06d8a04471db881f4d93ed651bfbc554b6338a48 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 18:42:29 addons-587000 dockerd[1294]: time="2024-09-25T18:42:29.307689758Z" level=info msg="shim disconnected" id=04dc49bc186fc2b2b2744c5d06d8a04471db881f4d93ed651bfbc554b6338a48 namespace=moby
	Sep 25 18:42:29 addons-587000 dockerd[1294]: time="2024-09-25T18:42:29.307849506Z" level=warning msg="cleaning up after shim disconnected" id=04dc49bc186fc2b2b2744c5d06d8a04471db881f4d93ed651bfbc554b6338a48 namespace=moby
	Sep 25 18:42:29 addons-587000 dockerd[1294]: time="2024-09-25T18:42:29.307861631Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 18:42:29 addons-587000 dockerd[1294]: time="2024-09-25T18:42:29.467412375Z" level=info msg="shim disconnected" id=ad5bb0b6e01e62768d533ce9886f1d50dffff00bd9076a68b3213a176373f7b7 namespace=moby
	Sep 25 18:42:29 addons-587000 dockerd[1294]: time="2024-09-25T18:42:29.467489166Z" level=warning msg="cleaning up after shim disconnected" id=ad5bb0b6e01e62768d533ce9886f1d50dffff00bd9076a68b3213a176373f7b7 namespace=moby
	Sep 25 18:42:29 addons-587000 dockerd[1294]: time="2024-09-25T18:42:29.467508457Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 18:42:29 addons-587000 dockerd[1288]: time="2024-09-25T18:42:29.467696288Z" level=info msg="ignoring event" container=ad5bb0b6e01e62768d533ce9886f1d50dffff00bd9076a68b3213a176373f7b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 18:42:29 addons-587000 dockerd[1294]: time="2024-09-25T18:42:29.473730089Z" level=warning msg="cleanup warnings time=\"2024-09-25T18:42:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 25 18:42:29 addons-587000 dockerd[1288]: time="2024-09-25T18:42:29.487379754Z" level=info msg="ignoring event" container=1d6821da439fca59eb37fa05179468858dc58d56d0d494111c0c695c7c7d59b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 18:42:29 addons-587000 dockerd[1294]: time="2024-09-25T18:42:29.487478336Z" level=info msg="shim disconnected" id=1d6821da439fca59eb37fa05179468858dc58d56d0d494111c0c695c7c7d59b7 namespace=moby
	Sep 25 18:42:29 addons-587000 dockerd[1294]: time="2024-09-25T18:42:29.487509544Z" level=warning msg="cleaning up after shim disconnected" id=1d6821da439fca59eb37fa05179468858dc58d56d0d494111c0c695c7c7d59b7 namespace=moby
	Sep 25 18:42:29 addons-587000 dockerd[1294]: time="2024-09-25T18:42:29.487513919Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 18:42:29 addons-587000 dockerd[1288]: time="2024-09-25T18:42:29.573264489Z" level=info msg="ignoring event" container=15aef4cf389c245e0a79a35ad44f2a2b80da22881ee247766baab1373da3f370 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 18:42:29 addons-587000 dockerd[1294]: time="2024-09-25T18:42:29.574759262Z" level=info msg="shim disconnected" id=15aef4cf389c245e0a79a35ad44f2a2b80da22881ee247766baab1373da3f370 namespace=moby
	Sep 25 18:42:29 addons-587000 dockerd[1294]: time="2024-09-25T18:42:29.574795303Z" level=warning msg="cleaning up after shim disconnected" id=15aef4cf389c245e0a79a35ad44f2a2b80da22881ee247766baab1373da3f370 namespace=moby
	Sep 25 18:42:29 addons-587000 dockerd[1294]: time="2024-09-25T18:42:29.574799595Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 18:42:29 addons-587000 dockerd[1294]: time="2024-09-25T18:42:29.593065328Z" level=info msg="shim disconnected" id=4cb7088092b9b4f210a492ad5e068807f8f17d5f7825eaba187d164f126e630a namespace=moby
	Sep 25 18:42:29 addons-587000 dockerd[1288]: time="2024-09-25T18:42:29.593154994Z" level=info msg="ignoring event" container=4cb7088092b9b4f210a492ad5e068807f8f17d5f7825eaba187d164f126e630a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 18:42:29 addons-587000 dockerd[1294]: time="2024-09-25T18:42:29.593267701Z" level=warning msg="cleaning up after shim disconnected" id=4cb7088092b9b4f210a492ad5e068807f8f17d5f7825eaba187d164f126e630a namespace=moby
	Sep 25 18:42:29 addons-587000 dockerd[1294]: time="2024-09-25T18:42:29.593288326Z" level=info msg="cleaning up dead shim" namespace=moby
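
The unauthorized pull above is the daemon's Head request against gcr.io/k8s-minikube/busybox:latest. A minimal pod sketch that would drive the daemon to attempt the same pull (pod name illustrative; the image and tag are copied from the error message):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-pull-check        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox:latest  # tag from the Head/POST errors above
    command: ["true"]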
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	2213a6e03f9e0       busybox@sha256:c230832bd3b0be59a6c47ed64294f9ce71e91b327957920b6929a0caa8353140                                                              27 seconds ago      Exited              busybox                                  0                   52e981c4627ff       test-local-path
	594fcbb69d302       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            36 seconds ago      Exited              gadget                                   7                   3469bad8fc1bb       gadget-5tkl8
	f248bc061b921       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 9 minutes ago       Running             gcp-auth                                 0                   1f1b76870a6c2       gcp-auth-89d5ffd79-5qrrd
	3304bfc7439c9       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce                             11 minutes ago      Running             controller                               0                   123b71f1df7b9       ingress-nginx-controller-bc57996ff-srfvf
	9fb1f960b06b9       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          11 minutes ago      Running             csi-snapshotter                          0                   d1d2fd98052b7       csi-hostpathplugin-bt2vs
	85ad188d8f1ba       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          11 minutes ago      Running             csi-provisioner                          0                   d1d2fd98052b7       csi-hostpathplugin-bt2vs
	0ff15031f4c49       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            11 minutes ago      Running             liveness-probe                           0                   d1d2fd98052b7       csi-hostpathplugin-bt2vs
	3aa96fa176ca5       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           11 minutes ago      Running             hostpath                                 0                   d1d2fd98052b7       csi-hostpathplugin-bt2vs
	d9cdf4d98ba52       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                11 minutes ago      Running             node-driver-registrar                    0                   d1d2fd98052b7       csi-hostpathplugin-bt2vs
	5b1e0f4243177       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              11 minutes ago      Running             csi-resizer                              0                   583aae3112f96       csi-hostpath-resizer-0
	b92a3b1c0231c       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   11 minutes ago      Running             csi-external-health-monitor-controller   0                   d1d2fd98052b7       csi-hostpathplugin-bt2vs
	0d8013b583fb7       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             11 minutes ago      Running             csi-attacher                             0                   3d6fd00b946d9       csi-hostpath-attacher-0
	218b697c73419       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3                   11 minutes ago      Exited              patch                                    0                   25847f2c97cc3       ingress-nginx-admission-patch-jl4xd
	a77634b11220e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3                   11 minutes ago      Exited              create                                   0                   68e291cdef376       ingress-nginx-admission-create-7njjq
	1d6821da439fc       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367                              11 minutes ago      Exited              registry-proxy                           0                   4cb7088092b9b       registry-proxy-6zqqk
	ad5bb0b6e01e6       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                                             11 minutes ago      Exited              registry                                 0                   15aef4cf389c2       registry-66c9cd494c-j9gg5
	7a452705ef925       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       11 minutes ago      Running             local-path-provisioner                   0                   31901a82e1f33       local-path-provisioner-86d989889c-clfk7
	e6f7d05a6cd1c       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        11 minutes ago      Running             metrics-server                           0                   f9c728a935963       metrics-server-84c5f94fbc-wbcvk
	088f5e381eee6       gcr.io/cloud-spanner-emulator/emulator@sha256:f78b14fe7e4632fc0b3c65e15101ebbbcf242857de9851d3c0baea94bd269b5e                               11 minutes ago      Running             cloud-spanner-emulator                   0                   94ea50d0f3186       cloud-spanner-emulator-5b584cc74-psdp2
	7b81a99af945e       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c                             12 minutes ago      Running             minikube-ingress-dns                     0                   eec2ff908954e       kube-ingress-dns-minikube
	11aa19506f701       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      12 minutes ago      Running             volume-snapshot-controller               0                   d55bb4fb4ab0f       snapshot-controller-56fcc65765-t7fvh
	7ed072ba41dc4       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      12 minutes ago      Running             volume-snapshot-controller               0                   793f308bf2ad9       snapshot-controller-56fcc65765-c2dls
	b4054fba7bc80       ba04bb24b9575                                                                                                                                12 minutes ago      Running             storage-provisioner                      0                   325e3c5d5457f       storage-provisioner
	cac6e32bef453       2f6c962e7b831                                                                                                                                12 minutes ago      Running             coredns                                  0                   219bea79f8589       coredns-7c65d6cfc9-8l5nn
	2affbcdd555b0       24a140c548c07                                                                                                                                12 minutes ago      Running             kube-proxy                               0                   be5dce0b10a3c       kube-proxy-xc7t5
	b13d61f2411cb       7f8aa378bb47d                                                                                                                                12 minutes ago      Running             kube-scheduler                           0                   0a9444b7cd353       kube-scheduler-addons-587000
	3b0d1bea29580       27e3830e14027                                                                                                                                12 minutes ago      Running             etcd                                     0                   dd578b7a4a3b3       etcd-addons-587000
	5f27663553d3d       d3f53a98c0a9d                                                                                                                                12 minutes ago      Running             kube-apiserver                           0                   6e0e9cd7ba1dd       kube-apiserver-addons-587000
	c048ab9f772bf       279f381cb3736                                                                                                                                12 minutes ago      Running             kube-controller-manager                  0                   e7509a710bdc7       kube-controller-manager-addons-587000
	
	
	==> controller_ingress [3304bfc7439c] <==
	W0925 18:31:17.316333       7 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0925 18:31:17.316417       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0925 18:31:17.319311       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/arm64"
	I0925 18:31:17.354906       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0925 18:31:17.360643       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0925 18:31:17.364725       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0925 18:31:17.369107       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"9b81134a-0e91-471f-badc-00dd29e79035", APIVersion:"v1", ResourceVersion:"629", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0925 18:31:17.370380       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"24c78938-458c-4765-aa00-f1b910b6cdb8", APIVersion:"v1", ResourceVersion:"630", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0925 18:31:17.370455       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"ca63b6a1-da8c-4289-9a5f-98e14a1062bd", APIVersion:"v1", ResourceVersion:"631", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0925 18:31:18.566606       7 nginx.go:317] "Starting NGINX process"
	I0925 18:31:18.566743       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0925 18:31:18.566953       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0925 18:31:18.567242       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0925 18:31:18.574422       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0925 18:31:18.574584       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-srfvf"
	I0925 18:31:18.580527       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-srfvf" node="addons-587000"
	I0925 18:31:18.603412       7 controller.go:213] "Backend successfully reloaded"
	I0925 18:31:18.603484       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0925 18:31:18.603510       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-srfvf", UID:"56427335-df5d-45d9-a50c-d2df47dc54ce", APIVersion:"v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [cac6e32bef45] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] 10.244.0.11:48558 - 8364 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000101709s
	[INFO] 10.244.0.11:48558 - 12970 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000227792s
	[INFO] 10.244.0.11:54424 - 52524 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000035458s
	[INFO] 10.244.0.11:54424 - 45099 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000039542s
	[INFO] 10.244.0.11:42084 - 53550 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000026917s
	[INFO] 10.244.0.11:42084 - 302 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000043375s
	[INFO] 10.244.0.11:60868 - 16800 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000040458s
	[INFO] 10.244.0.11:60868 - 43936 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000026709s
	[INFO] 10.244.0.11:46800 - 55891 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000027541s
	[INFO] 10.244.0.11:46800 - 20817 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000011667s
	[INFO] 10.244.0.11:39643 - 18079 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000037417s
	[INFO] 10.244.0.11:39643 - 18328 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00001625s
	[INFO] 10.244.0.11:40250 - 19718 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000015667s
	[INFO] 10.244.0.11:40250 - 13317 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00002825s
	[INFO] 10.244.0.11:48506 - 57558 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000015s
	[INFO] 10.244.0.11:48506 - 4310 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000015542s
	[INFO] 10.244.0.24:46039 - 46694 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002383412s
	[INFO] 10.244.0.24:60165 - 55787 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002432328s
	[INFO] 10.244.0.24:44222 - 4332 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000194541s
	[INFO] 10.244.0.24:58568 - 198 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000111333s
	[INFO] 10.244.0.24:46847 - 40350 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.0000355s
	[INFO] 10.244.0.24:51778 - 50671 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000028375s
	[INFO] 10.244.0.24:58567 - 61006 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.004413657s
	[INFO] 10.244.0.24:42339 - 8090 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004547574s
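
The NXDOMAIN-then-NOERROR cascades above are ordinary resolver search-path expansion: pod resolv.conf files are typically generated with ndots:5, so a name with fewer than five dots (registry.kube-system.svc.cluster.local has four; storage.googleapis.com has two) is first tried with each search suffix appended, and only the final bare lookup returns NOERROR. A sketch that spells the equivalent resolver settings out explicitly; the nameserver IP is the conventional kube-dns ClusterIP and is an assumption here, while the search list is inferred from the expanded names logged above (its first entry tracks the querying pod's namespace):

apiVersion: v1
kind: Pod
metadata:
  name: dns-demo                     # illustrative name
spec:
  dnsPolicy: None                    # take resolver settings from dnsConfig below
  dnsConfig:
    nameservers:
    - 10.96.0.10                     # assumed kube-dns ClusterIP (not shown in this report)
    searches:                        # inferred from the expanded query names above
    - kube-system.svc.cluster.local  # first entry is the querying pod's namespace
    - svc.cluster.local
    - cluster.local
    options:
    - name: ndots
      value: "5"                     # names with fewer dots get the search suffixes first
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]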
	
	
	==> describe nodes <==
	Name:               addons-587000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-587000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cb9e6220ecbd737c1d09ad9630c6f144f437664a
	                    minikube.k8s.io/name=addons-587000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_25T11_29_56_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-587000
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-587000"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 25 Sep 2024 18:29:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-587000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 25 Sep 2024 18:42:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 25 Sep 2024 18:41:31 +0000   Wed, 25 Sep 2024 18:29:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 25 Sep 2024 18:41:31 +0000   Wed, 25 Sep 2024 18:29:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 25 Sep 2024 18:41:31 +0000   Wed, 25 Sep 2024 18:29:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 25 Sep 2024 18:41:31 +0000   Wed, 25 Sep 2024 18:29:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-587000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 3f3c46fb70534ac1a5b48d2bb8e3c216
	  System UUID:                3f3c46fb70534ac1a5b48d2bb8e3c216
	  Boot ID:                    888016b8-8c58-42bb-9199-208693f07c42
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  default                     cloud-spanner-emulator-5b584cc74-psdp2      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gadget                      gadget-5tkl8                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-5qrrd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-srfvf    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-8l5nn                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpathplugin-bt2vs                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-addons-587000                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-587000                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-587000       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-xc7t5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-587000                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-wbcvk             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         12m
	  kube-system                 snapshot-controller-56fcc65765-c2dls        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-56fcc65765-t7fvh        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-clfk7     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 12m   kube-proxy       
	  Normal  Starting                 12m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m   kubelet          Node addons-587000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet          Node addons-587000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet          Node addons-587000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m   kubelet          Node addons-587000 status is now: NodeReady
	  Normal  RegisteredNode           12m   node-controller  Node addons-587000 event: Registered Node addons-587000 in Controller
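
As a quick consistency check on the Allocated resources table above: CPU requests sum to 100m + 100m + 100m + 250m + 200m + 100m + 100m = 950m, which against the 2000m allocatable is 47.5%, reported (truncated) as 47%. Memory requests sum to 90Mi + 70Mi + 100Mi + 200Mi = 460Mi of roughly 3813Mi allocatable (3904740Ki), about 12%, and the single 170Mi limit (coredns) is about 4%.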
	
	
	==> dmesg <==
	[  +0.053301] kauditd_printk_skb: 86 callbacks suppressed
	[  +5.115844] kauditd_printk_skb: 240 callbacks suppressed
	[  +6.530727] kauditd_printk_skb: 92 callbacks suppressed
	[ +22.805456] kauditd_printk_skb: 11 callbacks suppressed
	[  +7.208748] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.136658] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.971930] kauditd_printk_skb: 13 callbacks suppressed
	[Sep25 18:31] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.279280] kauditd_printk_skb: 7 callbacks suppressed
	[ +10.777296] kauditd_printk_skb: 32 callbacks suppressed
	[ +12.212915] kauditd_printk_skb: 18 callbacks suppressed
	[Sep25 18:32] kauditd_printk_skb: 2 callbacks suppressed
	[ +19.165297] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.528459] kauditd_printk_skb: 14 callbacks suppressed
	[Sep25 18:33] kauditd_printk_skb: 7 callbacks suppressed
	[ +10.899111] kauditd_printk_skb: 20 callbacks suppressed
	[ +19.420231] kauditd_printk_skb: 21 callbacks suppressed
	[Sep25 18:36] kauditd_printk_skb: 2 callbacks suppressed
	[Sep25 18:41] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.761683] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.490566] kauditd_printk_skb: 11 callbacks suppressed
	[ +11.314251] kauditd_printk_skb: 2 callbacks suppressed
	[  +9.374630] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.900266] kauditd_printk_skb: 28 callbacks suppressed
	[Sep25 18:42] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [3b0d1bea2958] <==
	{"level":"info","ts":"2024-09-25T18:29:52.999649Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2024-09-25T18:29:52.999655Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-09-25T18:29:53.000421Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-587000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-25T18:29:53.000440Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-25T18:29:53.000589Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-25T18:29:53.000766Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-25T18:29:53.000906Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-25T18:29:53.000915Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-25T18:29:53.000927Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-25T18:29:53.000949Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-25T18:29:53.000959Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-25T18:29:53.001321Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-25T18:29:53.001336Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-25T18:29:53.001861Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-25T18:29:53.001882Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2024-09-25T18:30:15.983166Z","caller":"traceutil/trace.go:171","msg":"trace[1758270467] linearizableReadLoop","detail":"{readStateIndex:983; appliedIndex:982; }","duration":"278.728433ms","start":"2024-09-25T18:30:15.704427Z","end":"2024-09-25T18:30:15.983156Z","steps":["trace[1758270467] 'read index received'  (duration: 278.637017ms)","trace[1758270467] 'applied index is now lower than readState.Index'  (duration: 91.208µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-25T18:30:15.983216Z","caller":"traceutil/trace.go:171","msg":"trace[1789306028] transaction","detail":"{read_only:false; response_revision:962; number_of_response:1; }","duration":"294.534809ms","start":"2024-09-25T18:30:15.688678Z","end":"2024-09-25T18:30:15.983213Z","steps":["trace[1789306028] 'process raft request'  (duration: 294.398226ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-25T18:30:15.983348Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"278.913975ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-25T18:30:15.983359Z","caller":"traceutil/trace.go:171","msg":"trace[2077895468] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:962; }","duration":"278.933851ms","start":"2024-09-25T18:30:15.704422Z","end":"2024-09-25T18:30:15.983356Z","steps":["trace[2077895468] 'agreement among raft nodes before linearized reading'  (duration: 278.895767ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-25T18:30:15.983396Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.449965ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-25T18:30:15.983402Z","caller":"traceutil/trace.go:171","msg":"trace[1707537621] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:962; }","duration":"102.456923ms","start":"2024-09-25T18:30:15.880943Z","end":"2024-09-25T18:30:15.983400Z","steps":["trace[1707537621] 'agreement among raft nodes before linearized reading'  (duration: 102.44684ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-25T18:31:17.138189Z","caller":"traceutil/trace.go:171","msg":"trace[1546522271] transaction","detail":"{read_only:false; response_revision:1208; number_of_response:1; }","duration":"142.293671ms","start":"2024-09-25T18:31:16.995880Z","end":"2024-09-25T18:31:17.138174Z","steps":["trace[1546522271] 'process raft request'  (duration: 142.220755ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-25T18:39:53.486798Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1852}
	{"level":"info","ts":"2024-09-25T18:39:53.574252Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1852,"took":"85.983865ms","hash":2650893062,"current-db-size-bytes":8986624,"current-db-size":"9.0 MB","current-db-size-in-use-bytes":4849664,"current-db-size-in-use":"4.8 MB"}
	{"level":"info","ts":"2024-09-25T18:39:53.574732Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2650893062,"revision":1852,"compact-revision":-1}
	
	
	==> gcp-auth [f248bc061b92] <==
	2024/09/25 18:32:37 GCP Auth Webhook started!
	2024/09/25 18:32:54 Ready to marshal response ...
	2024/09/25 18:32:54 Ready to write response ...
	2024/09/25 18:32:54 Ready to marshal response ...
	2024/09/25 18:32:54 Ready to write response ...
	2024/09/25 18:33:17 Ready to marshal response ...
	2024/09/25 18:33:17 Ready to write response ...
	2024/09/25 18:33:17 Ready to marshal response ...
	2024/09/25 18:33:17 Ready to write response ...
	2024/09/25 18:33:17 Ready to marshal response ...
	2024/09/25 18:33:17 Ready to write response ...
	2024/09/25 18:41:19 Ready to marshal response ...
	2024/09/25 18:41:19 Ready to write response ...
	2024/09/25 18:41:19 Ready to marshal response ...
	2024/09/25 18:41:19 Ready to write response ...
	2024/09/25 18:41:19 Ready to marshal response ...
	2024/09/25 18:41:19 Ready to write response ...
	2024/09/25 18:41:29 Ready to marshal response ...
	2024/09/25 18:41:29 Ready to write response ...
	2024/09/25 18:41:55 Ready to marshal response ...
	2024/09/25 18:41:55 Ready to write response ...
	2024/09/25 18:41:55 Ready to marshal response ...
	2024/09/25 18:41:55 Ready to write response ...
	2024/09/25 18:42:05 Ready to marshal response ...
	2024/09/25 18:42:05 Ready to write response ...
	
	
	==> kernel <==
	 18:42:30 up 12 min,  0 users,  load average: 1.02, 0.92, 0.60
	Linux addons-587000 5.10.207 #1 SMP PREEMPT Mon Sep 23 18:07:35 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5f27663553d3] <==
	E0925 18:31:30.663317       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.188.209:443: connect: connection refused" logger="UnhandledError"
	W0925 18:32:11.721888       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.188.209:443: connect: connection refused
	E0925 18:32:11.721982       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.188.209:443: connect: connection refused" logger="UnhandledError"
	W0925 18:32:11.728333       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.188.209:443: connect: connection refused
	E0925 18:32:11.728648       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.188.209:443: connect: connection refused" logger="UnhandledError"
	I0925 18:32:54.107595       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0925 18:32:54.119820       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0925 18:33:07.550639       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0925 18:33:07.550684       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0925 18:33:07.703200       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0925 18:33:07.724995       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0925 18:33:07.744618       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0925 18:33:07.910874       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0925 18:33:07.910901       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0925 18:33:07.928944       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0925 18:33:08.072782       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0925 18:33:08.686168       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0925 18:33:08.784212       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0925 18:33:08.911565       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0925 18:33:08.938626       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0925 18:33:08.985539       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0925 18:33:09.073413       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0925 18:33:09.195094       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0925 18:41:19.338556       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.32.3"}
	E0925 18:42:21.636608       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
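
"Failing open" in the webhook errors further up this section corresponds to failurePolicy: Ignore on the gcp-auth mutating webhook; the webhook name (gcp-auth-mutate.k8s.io) and the /mutate path on the gcp-auth.gcp-auth service appear verbatim in those lines. A minimal sketch of the corresponding configuration, with the object name and the rules stanza as assumptions rather than values taken from this report:

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: gcp-auth-webhook-cfg        # assumed object name
webhooks:
- name: gcp-auth-mutate.k8s.io      # webhook name from the log lines above
  failurePolicy: Ignore             # the "failing open" behavior seen above
  clientConfig:
    service:
      name: gcp-auth                # service host from the POST URL above
      namespace: gcp-auth
      path: /mutate
  admissionReviewVersions: ["v1"]
  sideEffects: None
  rules:                            # assumed scope: mutate pods on create
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]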
	
	
	==> kube-controller-manager [c048ab9f772b] <==
	W0925 18:41:26.579559       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 18:41:26.579657       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0925 18:41:31.877752       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-587000"
	W0925 18:41:31.935019       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 18:41:31.935158       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0925 18:41:32.605112       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="15.291µs"
	W0925 18:41:41.087325       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 18:41:41.087446       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0925 18:41:42.677446       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	W0925 18:41:43.134809       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 18:41:43.134947       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0925 18:41:43.918531       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="2.167µs"
	I0925 18:41:54.003299       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	W0925 18:42:02.379362       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 18:42:02.379409       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0925 18:42:05.853343       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="2.709µs"
	W0925 18:42:11.197129       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 18:42:11.197166       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0925 18:42:13.663203       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 18:42:13.663291       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0925 18:42:15.890208       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 18:42:15.890313       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0925 18:42:26.438106       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 18:42:26.438207       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0925 18:42:29.429627       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="2.75µs"
	
	
	==> kube-proxy [2affbcdd555b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0925 18:30:01.973975       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0925 18:30:02.441684       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.2"]
	E0925 18:30:02.441732       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0925 18:30:02.457837       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0925 18:30:02.457862       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0925 18:30:02.457878       1 server_linux.go:169] "Using iptables Proxier"
	I0925 18:30:02.464555       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0925 18:30:02.464710       1 server.go:483] "Version info" version="v1.31.1"
	I0925 18:30:02.464717       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0925 18:30:02.466635       1 config.go:199] "Starting service config controller"
	I0925 18:30:02.466804       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0925 18:30:02.467064       1 config.go:105] "Starting endpoint slice config controller"
	I0925 18:30:02.467075       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0925 18:30:02.467462       1 config.go:328] "Starting node config controller"
	I0925 18:30:02.467474       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0925 18:30:02.568137       1 shared_informer.go:320] Caches are synced for node config
	I0925 18:30:02.568198       1 shared_informer.go:320] Caches are synced for service config
	I0925 18:30:02.568218       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [b13d61f2411c] <==
	W0925 18:29:53.523442       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0925 18:29:53.523710       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0925 18:29:53.523542       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0925 18:29:53.523739       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0925 18:29:53.523453       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0925 18:29:53.523747       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0925 18:29:53.523465       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0925 18:29:53.523756       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0925 18:29:53.523559       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0925 18:29:53.523764       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0925 18:29:53.523571       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0925 18:29:53.523773       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0925 18:29:53.523312       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0925 18:29:53.523781       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0925 18:29:54.388475       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0925 18:29:54.388799       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0925 18:29:54.397814       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0925 18:29:54.397887       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0925 18:29:54.433971       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0925 18:29:54.434047       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0925 18:29:54.527451       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0925 18:29:54.527485       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0925 18:29:54.567849       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0925 18:29:54.567871       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0925 18:29:54.921827       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 25 18:42:27 addons-587000 kubelet[2065]: I0925 18:42:27.995068    2065 scope.go:117] "RemoveContainer" containerID="594fcbb69d302e0fc96b95a80c449ea1abcbd9fabde74105a7cc503dda1151aa"
	Sep 25 18:42:27 addons-587000 kubelet[2065]: E0925 18:42:27.996356    2065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-5tkl8_gadget(605ec4b1-4dd8-431d-a80f-f225a6b32ca9)\"" pod="gadget/gadget-5tkl8" podUID="605ec4b1-4dd8-431d-a80f-f225a6b32ca9"
	Sep 25 18:42:29 addons-587000 kubelet[2065]: I0925 18:42:29.329721    2065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5a7b8645-3dcb-4b57-867e-2840316344dc-gcp-creds\") pod \"5a7b8645-3dcb-4b57-867e-2840316344dc\" (UID: \"5a7b8645-3dcb-4b57-867e-2840316344dc\") "
	Sep 25 18:42:29 addons-587000 kubelet[2065]: I0925 18:42:29.329774    2065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85bqm\" (UniqueName: \"kubernetes.io/projected/5a7b8645-3dcb-4b57-867e-2840316344dc-kube-api-access-85bqm\") pod \"5a7b8645-3dcb-4b57-867e-2840316344dc\" (UID: \"5a7b8645-3dcb-4b57-867e-2840316344dc\") "
	Sep 25 18:42:29 addons-587000 kubelet[2065]: I0925 18:42:29.330449    2065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a7b8645-3dcb-4b57-867e-2840316344dc-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "5a7b8645-3dcb-4b57-867e-2840316344dc" (UID: "5a7b8645-3dcb-4b57-867e-2840316344dc"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 25 18:42:29 addons-587000 kubelet[2065]: I0925 18:42:29.336546    2065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a7b8645-3dcb-4b57-867e-2840316344dc-kube-api-access-85bqm" (OuterVolumeSpecName: "kube-api-access-85bqm") pod "5a7b8645-3dcb-4b57-867e-2840316344dc" (UID: "5a7b8645-3dcb-4b57-867e-2840316344dc"). InnerVolumeSpecName "kube-api-access-85bqm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 25 18:42:29 addons-587000 kubelet[2065]: I0925 18:42:29.429959    2065 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5a7b8645-3dcb-4b57-867e-2840316344dc-gcp-creds\") on node \"addons-587000\" DevicePath \"\""
	Sep 25 18:42:29 addons-587000 kubelet[2065]: I0925 18:42:29.429974    2065 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-85bqm\" (UniqueName: \"kubernetes.io/projected/5a7b8645-3dcb-4b57-867e-2840316344dc-kube-api-access-85bqm\") on node \"addons-587000\" DevicePath \"\""
	Sep 25 18:42:29 addons-587000 kubelet[2065]: I0925 18:42:29.732425    2065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbs8c\" (UniqueName: \"kubernetes.io/projected/cd05e219-d06b-4852-a6f2-4a3231bd632b-kube-api-access-wbs8c\") pod \"cd05e219-d06b-4852-a6f2-4a3231bd632b\" (UID: \"cd05e219-d06b-4852-a6f2-4a3231bd632b\") "
	Sep 25 18:42:29 addons-587000 kubelet[2065]: I0925 18:42:29.732454    2065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrbjh\" (UniqueName: \"kubernetes.io/projected/0581ec26-3502-4a43-9102-5a41bcd2e80c-kube-api-access-mrbjh\") pod \"0581ec26-3502-4a43-9102-5a41bcd2e80c\" (UID: \"0581ec26-3502-4a43-9102-5a41bcd2e80c\") "
	Sep 25 18:42:29 addons-587000 kubelet[2065]: I0925 18:42:29.733404    2065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0581ec26-3502-4a43-9102-5a41bcd2e80c-kube-api-access-mrbjh" (OuterVolumeSpecName: "kube-api-access-mrbjh") pod "0581ec26-3502-4a43-9102-5a41bcd2e80c" (UID: "0581ec26-3502-4a43-9102-5a41bcd2e80c"). InnerVolumeSpecName "kube-api-access-mrbjh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 25 18:42:29 addons-587000 kubelet[2065]: I0925 18:42:29.733813    2065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd05e219-d06b-4852-a6f2-4a3231bd632b-kube-api-access-wbs8c" (OuterVolumeSpecName: "kube-api-access-wbs8c") pod "cd05e219-d06b-4852-a6f2-4a3231bd632b" (UID: "cd05e219-d06b-4852-a6f2-4a3231bd632b"). InnerVolumeSpecName "kube-api-access-wbs8c". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 25 18:42:29 addons-587000 kubelet[2065]: I0925 18:42:29.833190    2065 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wbs8c\" (UniqueName: \"kubernetes.io/projected/cd05e219-d06b-4852-a6f2-4a3231bd632b-kube-api-access-wbs8c\") on node \"addons-587000\" DevicePath \"\""
	Sep 25 18:42:29 addons-587000 kubelet[2065]: I0925 18:42:29.833209    2065 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-mrbjh\" (UniqueName: \"kubernetes.io/projected/0581ec26-3502-4a43-9102-5a41bcd2e80c-kube-api-access-mrbjh\") on node \"addons-587000\" DevicePath \"\""
	Sep 25 18:42:29 addons-587000 kubelet[2065]: I0925 18:42:29.864569    2065 scope.go:117] "RemoveContainer" containerID="ad5bb0b6e01e62768d533ce9886f1d50dffff00bd9076a68b3213a176373f7b7"
	Sep 25 18:42:29 addons-587000 kubelet[2065]: I0925 18:42:29.895201    2065 scope.go:117] "RemoveContainer" containerID="ad5bb0b6e01e62768d533ce9886f1d50dffff00bd9076a68b3213a176373f7b7"
	Sep 25 18:42:29 addons-587000 kubelet[2065]: E0925 18:42:29.895634    2065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: ad5bb0b6e01e62768d533ce9886f1d50dffff00bd9076a68b3213a176373f7b7" containerID="ad5bb0b6e01e62768d533ce9886f1d50dffff00bd9076a68b3213a176373f7b7"
	Sep 25 18:42:29 addons-587000 kubelet[2065]: I0925 18:42:29.895654    2065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"ad5bb0b6e01e62768d533ce9886f1d50dffff00bd9076a68b3213a176373f7b7"} err="failed to get container status \"ad5bb0b6e01e62768d533ce9886f1d50dffff00bd9076a68b3213a176373f7b7\": rpc error: code = Unknown desc = Error response from daemon: No such container: ad5bb0b6e01e62768d533ce9886f1d50dffff00bd9076a68b3213a176373f7b7"
	Sep 25 18:42:29 addons-587000 kubelet[2065]: I0925 18:42:29.895679    2065 scope.go:117] "RemoveContainer" containerID="1d6821da439fca59eb37fa05179468858dc58d56d0d494111c0c695c7c7d59b7"
	Sep 25 18:42:29 addons-587000 kubelet[2065]: I0925 18:42:29.903259    2065 scope.go:117] "RemoveContainer" containerID="1d6821da439fca59eb37fa05179468858dc58d56d0d494111c0c695c7c7d59b7"
	Sep 25 18:42:29 addons-587000 kubelet[2065]: E0925 18:42:29.903615    2065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 1d6821da439fca59eb37fa05179468858dc58d56d0d494111c0c695c7c7d59b7" containerID="1d6821da439fca59eb37fa05179468858dc58d56d0d494111c0c695c7c7d59b7"
	Sep 25 18:42:29 addons-587000 kubelet[2065]: I0925 18:42:29.903632    2065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"1d6821da439fca59eb37fa05179468858dc58d56d0d494111c0c695c7c7d59b7"} err="failed to get container status \"1d6821da439fca59eb37fa05179468858dc58d56d0d494111c0c695c7c7d59b7\": rpc error: code = Unknown desc = Error response from daemon: No such container: 1d6821da439fca59eb37fa05179468858dc58d56d0d494111c0c695c7c7d59b7"
	Sep 25 18:42:29 addons-587000 kubelet[2065]: I0925 18:42:29.998612    2065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0581ec26-3502-4a43-9102-5a41bcd2e80c" path="/var/lib/kubelet/pods/0581ec26-3502-4a43-9102-5a41bcd2e80c/volumes"
	Sep 25 18:42:29 addons-587000 kubelet[2065]: I0925 18:42:29.998773    2065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a7b8645-3dcb-4b57-867e-2840316344dc" path="/var/lib/kubelet/pods/5a7b8645-3dcb-4b57-867e-2840316344dc/volumes"
	Sep 25 18:42:29 addons-587000 kubelet[2065]: I0925 18:42:29.998860    2065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd05e219-d06b-4852-a6f2-4a3231bd632b" path="/var/lib/kubelet/pods/cd05e219-d06b-4852-a6f2-4a3231bd632b/volumes"
	
	
	==> storage-provisioner [b4054fba7bc8] <==
	I0925 18:30:04.157270       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0925 18:30:04.166452       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0925 18:30:04.166492       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0925 18:30:04.172369       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0925 18:30:04.172434       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-587000_b99b3419-1712-40df-99c9-2de8c632ba73!
	I0925 18:30:04.172908       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b53e6a91-0d1d-46aa-85e4-15eb8a308ec8", APIVersion:"v1", ResourceVersion:"567", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-587000_b99b3419-1712-40df-99c9-2de8c632ba73 became leader
	I0925 18:30:04.272838       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-587000_b99b3419-1712-40df-99c9-2de8c632ba73!
	

-- /stdout --
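The kube-scheduler "forbidden" warnings near the top of the log above are the usual transient RBAC errors emitted while the control plane bootstraps, and they stop once the "Caches are synced" line appears. Had they persisted, a quick way to confirm the scheduler's effective permissions (hypothetical follow-up commands, not part of this run) would have been:

	kubectl --context addons-587000 auth can-i list namespaces --as=system:kube-scheduler
	kubectl --context addons-587000 auth can-i list csistoragecapacities.storage.k8s.io --as=system:kube-scheduler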
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-587000 -n addons-587000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-587000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-7njjq ingress-nginx-admission-patch-jl4xd
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-587000 describe pod busybox ingress-nginx-admission-create-7njjq ingress-nginx-admission-patch-jl4xd
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-587000 describe pod busybox ingress-nginx-admission-create-7njjq ingress-nginx-admission-patch-jl4xd: exit status 1 (40.446375ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-587000/192.168.105.2
	Start Time:       Wed, 25 Sep 2024 11:33:17 -0700
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hssd6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hssd6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason          Age                    From               Message
	  ----     ------          ----                   ----               -------
	  Normal   Scheduled       9m13s                  default-scheduler  Successfully assigned default/busybox to addons-587000
	  Normal   SandboxChanged  9m11s                  kubelet            Pod sandbox changed, it will be killed and re-created.
	  Normal   Pulling         7m51s (x4 over 9m12s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed          7m50s (x4 over 9m12s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed          7m50s (x4 over 9m12s)  kubelet            Error: ErrImagePull
	  Warning  Failed          7m36s (x6 over 9m11s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff         4m7s (x21 over 9m11s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-7njjq" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-jl4xd" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-587000 describe pod busybox ingress-nginx-admission-create-7njjq ingress-nginx-admission-patch-jl4xd: exit status 1
--- FAIL: TestAddons/parallel/Registry (71.34s)
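The describe output above isolates the real blocker for this test: every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc fails with "unauthorized: authentication failed" against gcr.io, so the busybox pod never leaves ImagePullBackOff. A manual reproduction from the same host (hypothetical commands, outside the test harness) would be:

	docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
	out/minikube-darwin-arm64 -p addons-587000 ssh -- docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc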

TestCertOptions (10.21s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-322000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-322000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.942144291s)

-- stdout --
	* [cert-options-322000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-322000" primary control-plane node in "cert-options-322000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-322000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-322000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-322000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-322000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-322000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (80.884708ms)

-- stdout --
	* The control-plane node cert-options-322000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-322000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-322000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-322000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-322000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-322000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (40.880125ms)

-- stdout --
	* The control-plane node cert-options-322000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-322000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-322000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-322000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-322000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-09-25 12:18:12.144794 -0700 PDT m=+2968.234966126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-322000 -n cert-options-322000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-322000 -n cert-options-322000: exit status 7 (29.832625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-322000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-322000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-322000
--- FAIL: TestCertOptions (10.21s)
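This failure, like nearly every qemu2 start below, reduces to one root cause: nothing is listening on /var/run/socket_vmnet, so each attempt to launch QEMU through socket_vmnet_client dies with "Connection refused" before a VM exists. A host-side sanity check (hypothetical commands; they assume the standard launchd-managed socket_vmnet install, and the client path is the one minikube itself executes in these logs) would be:

	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep socket_vmnet
	sudo /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true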

TestCertExpiration (195.4s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-271000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-271000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.037652416s)

-- stdout --
	* [cert-expiration-271000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-271000" primary control-plane node in "cert-expiration-271000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-271000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-271000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-271000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-271000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-271000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.23285375s)

-- stdout --
	* [cert-expiration-271000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-271000" primary control-plane node in "cert-expiration-271000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-271000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-271000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-271000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-271000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-271000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-271000" primary control-plane node in "cert-expiration-271000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-271000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-271000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-271000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-09-25 12:21:12.218684 -0700 PDT m=+3148.312199334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-271000 -n cert-expiration-271000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-271000 -n cert-expiration-271000: exit status 7 (46.016125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-271000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-271000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-271000
--- FAIL: TestCertExpiration (195.40s)
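Because the VM never boots, the test can neither age the certificates out nor observe the expected expiry warning; every SSH step just reports state=Stopped. On a cluster that did start, the expiry being asserted on could be read directly (hypothetical command; the certificate path is the same one TestCertOptions inspects above):

	out/minikube-darwin-arm64 -p cert-expiration-271000 ssh -- sudo openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt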

TestDockerFlags (10.39s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-398000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-398000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.154684334s)

-- stdout --
	* [docker-flags-398000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-398000" primary control-plane node in "docker-flags-398000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-398000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 12:17:51.688583    4797 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:17:51.688716    4797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:17:51.688719    4797 out.go:358] Setting ErrFile to fd 2...
	I0925 12:17:51.688722    4797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:17:51.688858    4797 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:17:51.689936    4797 out.go:352] Setting JSON to false
	I0925 12:17:51.705877    4797 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4642,"bootTime":1727287229,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 12:17:51.705951    4797 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 12:17:51.712681    4797 out.go:177] * [docker-flags-398000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 12:17:51.720511    4797 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 12:17:51.720553    4797 notify.go:220] Checking for updates...
	I0925 12:17:51.727478    4797 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:17:51.730435    4797 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 12:17:51.733473    4797 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 12:17:51.736496    4797 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 12:17:51.739421    4797 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 12:17:51.742851    4797 config.go:182] Loaded profile config "force-systemd-flag-093000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:17:51.742915    4797 config.go:182] Loaded profile config "multinode-761000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:17:51.742960    4797 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 12:17:51.747480    4797 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 12:17:51.754425    4797 start.go:297] selected driver: qemu2
	I0925 12:17:51.754432    4797 start.go:901] validating driver "qemu2" against <nil>
	I0925 12:17:51.754445    4797 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 12:17:51.756732    4797 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0925 12:17:51.760455    4797 out.go:177] * Automatically selected the socket_vmnet network
	I0925 12:17:51.763622    4797 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0925 12:17:51.763641    4797 cni.go:84] Creating CNI manager for ""
	I0925 12:17:51.763668    4797 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 12:17:51.763676    4797 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 12:17:51.763702    4797 start.go:340] cluster config:
	{Name:docker-flags-398000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-398000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:17:51.767355    4797 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:17:51.774488    4797 out.go:177] * Starting "docker-flags-398000" primary control-plane node in "docker-flags-398000" cluster
	I0925 12:17:51.778463    4797 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 12:17:51.778480    4797 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0925 12:17:51.778491    4797 cache.go:56] Caching tarball of preloaded images
	I0925 12:17:51.778562    4797 preload.go:172] Found /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 12:17:51.778576    4797 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0925 12:17:51.778629    4797 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/docker-flags-398000/config.json ...
	I0925 12:17:51.778647    4797 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/docker-flags-398000/config.json: {Name:mk4d373b894d93746f0f46a988b0fff48a1959e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:17:51.778867    4797 start.go:360] acquireMachinesLock for docker-flags-398000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:17:51.778912    4797 start.go:364] duration metric: took 37.083µs to acquireMachinesLock for "docker-flags-398000"
	I0925 12:17:51.778926    4797 start.go:93] Provisioning new machine with config: &{Name:docker-flags-398000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-398000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:17:51.778962    4797 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:17:51.787438    4797 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0925 12:17:51.805602    4797 start.go:159] libmachine.API.Create for "docker-flags-398000" (driver="qemu2")
	I0925 12:17:51.805638    4797 client.go:168] LocalClient.Create starting
	I0925 12:17:51.805709    4797 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:17:51.805740    4797 main.go:141] libmachine: Decoding PEM data...
	I0925 12:17:51.805751    4797 main.go:141] libmachine: Parsing certificate...
	I0925 12:17:51.805789    4797 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:17:51.805813    4797 main.go:141] libmachine: Decoding PEM data...
	I0925 12:17:51.805821    4797 main.go:141] libmachine: Parsing certificate...
	I0925 12:17:51.806175    4797 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:17:51.969429    4797 main.go:141] libmachine: Creating SSH key...
	I0925 12:17:52.279226    4797 main.go:141] libmachine: Creating Disk image...
	I0925 12:17:52.279236    4797 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:17:52.279469    4797 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/docker-flags-398000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/docker-flags-398000/disk.qcow2
	I0925 12:17:52.289409    4797 main.go:141] libmachine: STDOUT: 
	I0925 12:17:52.289480    4797 main.go:141] libmachine: STDERR: 
	I0925 12:17:52.289545    4797 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/docker-flags-398000/disk.qcow2 +20000M
	I0925 12:17:52.297560    4797 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:17:52.297575    4797 main.go:141] libmachine: STDERR: 
	I0925 12:17:52.297589    4797 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/docker-flags-398000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/docker-flags-398000/disk.qcow2
	I0925 12:17:52.297593    4797 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:17:52.297606    4797 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:17:52.297635    4797 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/docker-flags-398000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/docker-flags-398000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/docker-flags-398000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:a1:5d:64:ef:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/docker-flags-398000/disk.qcow2
	I0925 12:17:52.299307    4797 main.go:141] libmachine: STDOUT: 
	I0925 12:17:52.299322    4797 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:17:52.299341    4797 client.go:171] duration metric: took 493.706167ms to LocalClient.Create
	I0925 12:17:54.301470    4797 start.go:128] duration metric: took 2.522538791s to createHost
	I0925 12:17:54.301531    4797 start.go:83] releasing machines lock for "docker-flags-398000", held for 2.522645625s
	W0925 12:17:54.301625    4797 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:17:54.331682    4797 out.go:177] * Deleting "docker-flags-398000" in qemu2 ...
	W0925 12:17:54.358912    4797 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:17:54.358927    4797 start.go:729] Will try again in 5 seconds ...
	I0925 12:17:59.361050    4797 start.go:360] acquireMachinesLock for docker-flags-398000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:17:59.387244    4797 start.go:364] duration metric: took 26.0745ms to acquireMachinesLock for "docker-flags-398000"
	I0925 12:17:59.387406    4797 start.go:93] Provisioning new machine with config: &{Name:docker-flags-398000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-398000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:17:59.387764    4797 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:17:59.403427    4797 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0925 12:17:59.452493    4797 start.go:159] libmachine.API.Create for "docker-flags-398000" (driver="qemu2")
	I0925 12:17:59.452552    4797 client.go:168] LocalClient.Create starting
	I0925 12:17:59.452674    4797 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:17:59.452751    4797 main.go:141] libmachine: Decoding PEM data...
	I0925 12:17:59.452768    4797 main.go:141] libmachine: Parsing certificate...
	I0925 12:17:59.452837    4797 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:17:59.452883    4797 main.go:141] libmachine: Decoding PEM data...
	I0925 12:17:59.452894    4797 main.go:141] libmachine: Parsing certificate...
	I0925 12:17:59.453459    4797 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:17:59.631190    4797 main.go:141] libmachine: Creating SSH key...
	I0925 12:17:59.738446    4797 main.go:141] libmachine: Creating Disk image...
	I0925 12:17:59.738454    4797 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:17:59.738639    4797 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/docker-flags-398000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/docker-flags-398000/disk.qcow2
	I0925 12:17:59.747794    4797 main.go:141] libmachine: STDOUT: 
	I0925 12:17:59.747823    4797 main.go:141] libmachine: STDERR: 
	I0925 12:17:59.747882    4797 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/docker-flags-398000/disk.qcow2 +20000M
	I0925 12:17:59.755611    4797 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:17:59.755625    4797 main.go:141] libmachine: STDERR: 
	I0925 12:17:59.755642    4797 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/docker-flags-398000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/docker-flags-398000/disk.qcow2
	I0925 12:17:59.755648    4797 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:17:59.755657    4797 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:17:59.755692    4797 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/docker-flags-398000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/docker-flags-398000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/docker-flags-398000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:de:cd:73:78:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/docker-flags-398000/disk.qcow2
	I0925 12:17:59.757261    4797 main.go:141] libmachine: STDOUT: 
	I0925 12:17:59.757274    4797 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:17:59.757287    4797 client.go:171] duration metric: took 304.734417ms to LocalClient.Create
	I0925 12:18:01.759492    4797 start.go:128] duration metric: took 2.3717225s to createHost
	I0925 12:18:01.759586    4797 start.go:83] releasing machines lock for "docker-flags-398000", held for 2.372342625s
	W0925 12:18:01.759941    4797 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-398000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-398000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:18:01.780618    4797 out.go:201] 
	W0925 12:18:01.788442    4797 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:18:01.788470    4797 out.go:270] * 
	* 
	W0925 12:18:01.790814    4797 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 12:18:01.801415    4797 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-398000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
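Note: up to the network step, the machine-creation sequence in the log above succeeds: qemu-img converts the raw seed image to qcow2, then grows it in place by 20000M. A minimal stand-alone sketch of that same disk pipeline (the MACHINE path is illustrative, shortened from the log):

    # Build the machine disk the same way the log does: convert, then grow.
    MACHINE="$HOME/.minikube/machines/docker-flags-398000"   # illustrative path
    qemu-img convert -f raw -O qcow2 "$MACHINE/disk.qcow2.raw" "$MACHINE/disk.qcow2"
    qemu-img resize "$MACHINE/disk.qcow2" +20000M            # grow by 20000 MB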
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-398000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-398000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (76.0895ms)

-- stdout --
	* The control-plane node docker-flags-398000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-398000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-398000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-398000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-398000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-398000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-398000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-398000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-398000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (44.783916ms)

-- stdout --
	* The control-plane node docker-flags-398000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-398000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-398000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-398000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-398000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-398000\"\n"
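Note: for comparison, on a profile whose VM actually boots, the two probes above would be expected to surface the configured values rather than the state=Stopped notice; roughly (hypothetical output, since no VM started in this run):

    $ out/minikube-darwin-arm64 -p docker-flags-398000 ssh "sudo systemctl show docker --property=Environment --no-pager"
    Environment=FOO=BAR BAZ=BAT ...
    $ out/minikube-darwin-arm64 -p docker-flags-398000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
    ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd ... --debug --icc=true ... }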
panic.go:629: *** TestDockerFlags FAILED at 2024-09-25 12:18:01.939327 -0700 PDT m=+2958.029310084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-398000 -n docker-flags-398000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-398000 -n docker-flags-398000: exit status 7 (29.73275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-398000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-398000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-398000
--- FAIL: TestDockerFlags (10.39s)
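Note: every start attempt in this group dies on the same precondition: nothing is serving /var/run/socket_vmnet on the host, so socket_vmnet_client is refused before QEMU ever launches. A plausible host-side check and manual start, assuming the default /opt/socket_vmnet prefix (the --vmnet-gateway value is the example from the socket_vmnet README, not taken from this log):

    # Does the socket exist, and is the daemon holding it?
    ls -l /var/run/socket_vmnet
    # Start the daemon by hand if it is not running (vmnet requires root).
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet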

TestForceSystemdFlag (10.35s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-093000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-093000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.155627041s)

-- stdout --
	* [force-systemd-flag-093000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-093000" primary control-plane node in "force-systemd-flag-093000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-093000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 12:17:46.628994    4776 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:17:46.629120    4776 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:17:46.629123    4776 out.go:358] Setting ErrFile to fd 2...
	I0925 12:17:46.629130    4776 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:17:46.629257    4776 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:17:46.630270    4776 out.go:352] Setting JSON to false
	I0925 12:17:46.645942    4776 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4637,"bootTime":1727287229,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 12:17:46.646018    4776 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 12:17:46.653292    4776 out.go:177] * [force-systemd-flag-093000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 12:17:46.671280    4776 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 12:17:46.671307    4776 notify.go:220] Checking for updates...
	I0925 12:17:46.682222    4776 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:17:46.686294    4776 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 12:17:46.689204    4776 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 12:17:46.692278    4776 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 12:17:46.695258    4776 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 12:17:46.698512    4776 config.go:182] Loaded profile config "force-systemd-env-884000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:17:46.698587    4776 config.go:182] Loaded profile config "multinode-761000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:17:46.698639    4776 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 12:17:46.703229    4776 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 12:17:46.710198    4776 start.go:297] selected driver: qemu2
	I0925 12:17:46.710207    4776 start.go:901] validating driver "qemu2" against <nil>
	I0925 12:17:46.710216    4776 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 12:17:46.712680    4776 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0925 12:17:46.716242    4776 out.go:177] * Automatically selected the socket_vmnet network
	I0925 12:17:46.719344    4776 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0925 12:17:46.719363    4776 cni.go:84] Creating CNI manager for ""
	I0925 12:17:46.719403    4776 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 12:17:46.719410    4776 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 12:17:46.719446    4776 start.go:340] cluster config:
	{Name:force-systemd-flag-093000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-093000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:17:46.723224    4776 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:17:46.730247    4776 out.go:177] * Starting "force-systemd-flag-093000" primary control-plane node in "force-systemd-flag-093000" cluster
	I0925 12:17:46.734093    4776 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 12:17:46.734110    4776 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0925 12:17:46.734122    4776 cache.go:56] Caching tarball of preloaded images
	I0925 12:17:46.734194    4776 preload.go:172] Found /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 12:17:46.734200    4776 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0925 12:17:46.734264    4776 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/force-systemd-flag-093000/config.json ...
	I0925 12:17:46.734276    4776 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/force-systemd-flag-093000/config.json: {Name:mkfa1a1a59c5fc7c1155d4212a01df01ecf0d170 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:17:46.734512    4776 start.go:360] acquireMachinesLock for force-systemd-flag-093000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:17:46.734553    4776 start.go:364] duration metric: took 32.417µs to acquireMachinesLock for "force-systemd-flag-093000"
	I0925 12:17:46.734568    4776 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-093000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-093000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:17:46.734597    4776 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:17:46.739284    4776 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0925 12:17:46.759266    4776 start.go:159] libmachine.API.Create for "force-systemd-flag-093000" (driver="qemu2")
	I0925 12:17:46.759295    4776 client.go:168] LocalClient.Create starting
	I0925 12:17:46.759368    4776 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:17:46.759403    4776 main.go:141] libmachine: Decoding PEM data...
	I0925 12:17:46.759414    4776 main.go:141] libmachine: Parsing certificate...
	I0925 12:17:46.759457    4776 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:17:46.759483    4776 main.go:141] libmachine: Decoding PEM data...
	I0925 12:17:46.759491    4776 main.go:141] libmachine: Parsing certificate...
	I0925 12:17:46.759878    4776 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:17:46.925440    4776 main.go:141] libmachine: Creating SSH key...
	I0925 12:17:47.016567    4776 main.go:141] libmachine: Creating Disk image...
	I0925 12:17:47.016573    4776 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:17:47.016755    4776 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/force-systemd-flag-093000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/force-systemd-flag-093000/disk.qcow2
	I0925 12:17:47.026079    4776 main.go:141] libmachine: STDOUT: 
	I0925 12:17:47.026099    4776 main.go:141] libmachine: STDERR: 
	I0925 12:17:47.026156    4776 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/force-systemd-flag-093000/disk.qcow2 +20000M
	I0925 12:17:47.034103    4776 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:17:47.034117    4776 main.go:141] libmachine: STDERR: 
	I0925 12:17:47.034130    4776 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/force-systemd-flag-093000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/force-systemd-flag-093000/disk.qcow2
	I0925 12:17:47.034137    4776 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:17:47.034152    4776 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:17:47.034176    4776 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/force-systemd-flag-093000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/force-systemd-flag-093000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/force-systemd-flag-093000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:29:77:a2:da:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/force-systemd-flag-093000/disk.qcow2
	I0925 12:17:47.035814    4776 main.go:141] libmachine: STDOUT: 
	I0925 12:17:47.035829    4776 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:17:47.035851    4776 client.go:171] duration metric: took 276.553167ms to LocalClient.Create
	I0925 12:17:49.038015    4776 start.go:128] duration metric: took 2.303436375s to createHost
	I0925 12:17:49.038078    4776 start.go:83] releasing machines lock for "force-systemd-flag-093000", held for 2.303557583s
	W0925 12:17:49.038143    4776 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:17:49.069387    4776 out.go:177] * Deleting "force-systemd-flag-093000" in qemu2 ...
	W0925 12:17:49.095402    4776 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:17:49.095416    4776 start.go:729] Will try again in 5 seconds ...
	I0925 12:17:54.097545    4776 start.go:360] acquireMachinesLock for force-systemd-flag-093000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:17:54.301686    4776 start.go:364] duration metric: took 204.01725ms to acquireMachinesLock for "force-systemd-flag-093000"
	I0925 12:17:54.301825    4776 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-093000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-093000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:17:54.302063    4776 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:17:54.317630    4776 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0925 12:17:54.366931    4776 start.go:159] libmachine.API.Create for "force-systemd-flag-093000" (driver="qemu2")
	I0925 12:17:54.366986    4776 client.go:168] LocalClient.Create starting
	I0925 12:17:54.367102    4776 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:17:54.367175    4776 main.go:141] libmachine: Decoding PEM data...
	I0925 12:17:54.367196    4776 main.go:141] libmachine: Parsing certificate...
	I0925 12:17:54.367250    4776 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:17:54.367295    4776 main.go:141] libmachine: Decoding PEM data...
	I0925 12:17:54.367312    4776 main.go:141] libmachine: Parsing certificate...
	I0925 12:17:54.369199    4776 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:17:54.556887    4776 main.go:141] libmachine: Creating SSH key...
	I0925 12:17:54.682234    4776 main.go:141] libmachine: Creating Disk image...
	I0925 12:17:54.682239    4776 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:17:54.682433    4776 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/force-systemd-flag-093000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/force-systemd-flag-093000/disk.qcow2
	I0925 12:17:54.691803    4776 main.go:141] libmachine: STDOUT: 
	I0925 12:17:54.691832    4776 main.go:141] libmachine: STDERR: 
	I0925 12:17:54.691899    4776 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/force-systemd-flag-093000/disk.qcow2 +20000M
	I0925 12:17:54.699834    4776 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:17:54.699849    4776 main.go:141] libmachine: STDERR: 
	I0925 12:17:54.699863    4776 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/force-systemd-flag-093000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/force-systemd-flag-093000/disk.qcow2
	I0925 12:17:54.699866    4776 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:17:54.699877    4776 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:17:54.699909    4776 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/force-systemd-flag-093000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/force-systemd-flag-093000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/force-systemd-flag-093000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:1d:e3:f0:11:4c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/force-systemd-flag-093000/disk.qcow2
	I0925 12:17:54.701471    4776 main.go:141] libmachine: STDOUT: 
	I0925 12:17:54.701489    4776 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:17:54.701501    4776 client.go:171] duration metric: took 334.516ms to LocalClient.Create
	I0925 12:17:56.703247    4776 start.go:128] duration metric: took 2.401167333s to createHost
	I0925 12:17:56.703315    4776 start.go:83] releasing machines lock for "force-systemd-flag-093000", held for 2.40162575s
	W0925 12:17:56.703636    4776 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-093000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-093000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:17:56.726399    4776 out.go:201] 
	W0925 12:17:56.730432    4776 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:17:56.730455    4776 out.go:270] * 
	* 
	W0925 12:17:56.732901    4776 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 12:17:56.743344    4776 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-093000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-093000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-093000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (74.371834ms)

-- stdout --
	* The control-plane node force-systemd-flag-093000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-093000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-093000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
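Note: the assertion this test never reaches is the cgroup-driver probe above; with --force-systemd the expectation would be that Docker inside the guest reports the systemd driver (hypothetical output, since the VM never booted):

    $ out/minikube-darwin-arm64 -p force-systemd-flag-093000 ssh "docker info --format {{.CgroupDriver}}"
    systemd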
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-09-25 12:17:56.834371 -0700 PDT m=+2952.924259418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-093000 -n force-systemd-flag-093000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-093000 -n force-systemd-flag-093000: exit status 7 (35.433417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-093000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-093000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-093000
--- FAIL: TestForceSystemdFlag (10.35s)

TestForceSystemdEnv (11.04s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-884000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
I0925 12:17:41.208337    1934 install.go:79] stdout: 
W0925 12:17:41.208481    1934 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2044776198/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2044776198/001/docker-machine-driver-hyperkit 

I0925 12:17:41.208503    1934 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2044776198/001/docker-machine-driver-hyperkit]
I0925 12:17:41.218540    1934 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2044776198/001/docker-machine-driver-hyperkit]
I0925 12:17:41.227191    1934 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2044776198/001/docker-machine-driver-hyperkit]
I0925 12:17:41.235600    1934 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2044776198/001/docker-machine-driver-hyperkit]
I0925 12:17:41.251287    1934 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0925 12:17:41.251395    1934 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
I0925 12:17:43.050998    1934 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W0925 12:17:43.051018    1934 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W0925 12:17:43.051070    1934 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I0925 12:17:43.051101    1934 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2044776198/002/docker-machine-driver-hyperkit
I0925 12:17:43.472272    1934 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2044776198/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x10675ad40 0x10675ad40 0x10675ad40 0x10675ad40 0x10675ad40 0x10675ad40 0x10675ad40] Decompressors:map[bz2:0x140005d95d0 gz:0x140005d95d8 tar:0x140005d9580 tar.bz2:0x140005d9590 tar.gz:0x140005d95a0 tar.xz:0x140005d95b0 tar.zst:0x140005d95c0 tbz2:0x140005d9590 tgz:0x140005d95a0 txz:0x140005d95b0 tzst:0x140005d95c0 xz:0x140005d95e0 zip:0x140005d95f0 zst:0x140005d95e8] Getters:map[file:0x140013efcd0 http:0x1400004c5f0 https:0x1400004c640] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0925 12:17:43.472415    1934 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2044776198/002/docker-machine-driver-hyperkit
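Note: the arch-specific fetch above fails because the v1.3.0 release presumably carries no arm64 asset, so its .sha256 checksum URL returns 404 and download.go falls back to the unsuffixed artifact. The missing asset can be confirmed by hand (illustrative check):

    # Expect a 404 status line, matching "bad response code: 404" in the log.
    curl -sI https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 | head -n 1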
I0925 12:17:46.556936    1934 install.go:79] stdout: 
W0925 12:17:46.557098    1934 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2044776198/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2044776198/002/docker-machine-driver-hyperkit 

I0925 12:17:46.557125    1934 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2044776198/002/docker-machine-driver-hyperkit]
I0925 12:17:46.570615    1934 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2044776198/002/docker-machine-driver-hyperkit]
I0925 12:17:46.582202    1934 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2044776198/002/docker-machine-driver-hyperkit]
I0925 12:17:46.590572    1934 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2044776198/002/docker-machine-driver-hyperkit]
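Note: the hyperkit driver binary must be owned by root and carry the setuid bit, which is exactly what the two sudo steps above apply. The equivalent manual sequence, with a verification step (path shortened for illustration):

    sudo chown root:wheel ./docker-machine-driver-hyperkit
    sudo chmod u+s ./docker-machine-driver-hyperkit
    ls -l ./docker-machine-driver-hyperkit   # mode should now read -rwsr-xr-x, owner root:wheel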
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-884000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.848450125s)

-- stdout --
	* [force-systemd-env-884000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-884000" primary control-plane node in "force-systemd-env-884000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-884000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 12:17:40.651869    4742 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:17:40.652009    4742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:17:40.652012    4742 out.go:358] Setting ErrFile to fd 2...
	I0925 12:17:40.652015    4742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:17:40.652150    4742 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:17:40.653274    4742 out.go:352] Setting JSON to false
	I0925 12:17:40.669915    4742 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4631,"bootTime":1727287229,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 12:17:40.669982    4742 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 12:17:40.675400    4742 out.go:177] * [force-systemd-env-884000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 12:17:40.684254    4742 notify.go:220] Checking for updates...
	I0925 12:17:40.688114    4742 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 12:17:40.696165    4742 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:17:40.704121    4742 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 12:17:40.712107    4742 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 12:17:40.716184    4742 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 12:17:40.719192    4742 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0925 12:17:40.722567    4742 config.go:182] Loaded profile config "multinode-761000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:17:40.722625    4742 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 12:17:40.727158    4742 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 12:17:40.734134    4742 start.go:297] selected driver: qemu2
	I0925 12:17:40.734140    4742 start.go:901] validating driver "qemu2" against <nil>
	I0925 12:17:40.734146    4742 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 12:17:40.736387    4742 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0925 12:17:40.739113    4742 out.go:177] * Automatically selected the socket_vmnet network
	I0925 12:17:40.742280    4742 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0925 12:17:40.742300    4742 cni.go:84] Creating CNI manager for ""
	I0925 12:17:40.742321    4742 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 12:17:40.742330    4742 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 12:17:40.742354    4742 start.go:340] cluster config:
	{Name:force-systemd-env-884000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-884000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:17:40.746099    4742 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:17:40.756201    4742 out.go:177] * Starting "force-systemd-env-884000" primary control-plane node in "force-systemd-env-884000" cluster
	I0925 12:17:40.760005    4742 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 12:17:40.760018    4742 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0925 12:17:40.760028    4742 cache.go:56] Caching tarball of preloaded images
	I0925 12:17:40.760094    4742 preload.go:172] Found /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 12:17:40.760100    4742 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0925 12:17:40.760159    4742 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/force-systemd-env-884000/config.json ...
	I0925 12:17:40.760169    4742 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/force-systemd-env-884000/config.json: {Name:mk1e39c8878b77c4aab1fec3b70848051c5285ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:17:40.760365    4742 start.go:360] acquireMachinesLock for force-systemd-env-884000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:17:40.760399    4742 start.go:364] duration metric: took 27.417µs to acquireMachinesLock for "force-systemd-env-884000"
	I0925 12:17:40.760411    4742 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-884000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-884000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:17:40.760436    4742 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:17:40.768149    4742 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0925 12:17:40.786118    4742 start.go:159] libmachine.API.Create for "force-systemd-env-884000" (driver="qemu2")
	I0925 12:17:40.786146    4742 client.go:168] LocalClient.Create starting
	I0925 12:17:40.786217    4742 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:17:40.786247    4742 main.go:141] libmachine: Decoding PEM data...
	I0925 12:17:40.786256    4742 main.go:141] libmachine: Parsing certificate...
	I0925 12:17:40.786294    4742 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:17:40.786316    4742 main.go:141] libmachine: Decoding PEM data...
	I0925 12:17:40.786324    4742 main.go:141] libmachine: Parsing certificate...
	I0925 12:17:40.786673    4742 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:17:40.956511    4742 main.go:141] libmachine: Creating SSH key...
	I0925 12:17:41.013382    4742 main.go:141] libmachine: Creating Disk image...
	I0925 12:17:41.013388    4742 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:17:41.013562    4742 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/force-systemd-env-884000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/force-systemd-env-884000/disk.qcow2
	I0925 12:17:41.022949    4742 main.go:141] libmachine: STDOUT: 
	I0925 12:17:41.022969    4742 main.go:141] libmachine: STDERR: 
	I0925 12:17:41.023034    4742 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/force-systemd-env-884000/disk.qcow2 +20000M
	I0925 12:17:41.031276    4742 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:17:41.031293    4742 main.go:141] libmachine: STDERR: 
	I0925 12:17:41.031311    4742 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/force-systemd-env-884000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/force-systemd-env-884000/disk.qcow2
	I0925 12:17:41.031318    4742 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:17:41.031329    4742 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:17:41.031355    4742 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/force-systemd-env-884000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/force-systemd-env-884000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/force-systemd-env-884000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:59:10:59:98:25 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/force-systemd-env-884000/disk.qcow2
	I0925 12:17:41.033031    4742 main.go:141] libmachine: STDOUT: 
	I0925 12:17:41.033046    4742 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:17:41.033064    4742 client.go:171] duration metric: took 246.91625ms to LocalClient.Create
	I0925 12:17:43.035112    4742 start.go:128] duration metric: took 2.274709708s to createHost
	I0925 12:17:43.035130    4742 start.go:83] releasing machines lock for "force-systemd-env-884000", held for 2.274769333s
	W0925 12:17:43.035141    4742 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:17:43.048376    4742 out.go:177] * Deleting "force-systemd-env-884000" in qemu2 ...
	W0925 12:17:43.059877    4742 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:17:43.059885    4742 start.go:729] Will try again in 5 seconds ...
	I0925 12:17:48.062157    4742 start.go:360] acquireMachinesLock for force-systemd-env-884000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:17:49.038228    4742 start.go:364] duration metric: took 975.968041ms to acquireMachinesLock for "force-systemd-env-884000"
	I0925 12:17:49.038400    4742 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-884000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-884000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:17:49.038719    4742 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:17:49.055405    4742 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0925 12:17:49.103458    4742 start.go:159] libmachine.API.Create for "force-systemd-env-884000" (driver="qemu2")
	I0925 12:17:49.103504    4742 client.go:168] LocalClient.Create starting
	I0925 12:17:49.103637    4742 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:17:49.103695    4742 main.go:141] libmachine: Decoding PEM data...
	I0925 12:17:49.103712    4742 main.go:141] libmachine: Parsing certificate...
	I0925 12:17:49.103771    4742 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:17:49.103816    4742 main.go:141] libmachine: Decoding PEM data...
	I0925 12:17:49.103828    4742 main.go:141] libmachine: Parsing certificate...
	I0925 12:17:49.107987    4742 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:17:49.329487    4742 main.go:141] libmachine: Creating SSH key...
	I0925 12:17:49.395858    4742 main.go:141] libmachine: Creating Disk image...
	I0925 12:17:49.395863    4742 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:17:49.396066    4742 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/force-systemd-env-884000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/force-systemd-env-884000/disk.qcow2
	I0925 12:17:49.405504    4742 main.go:141] libmachine: STDOUT: 
	I0925 12:17:49.405531    4742 main.go:141] libmachine: STDERR: 
	I0925 12:17:49.405590    4742 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/force-systemd-env-884000/disk.qcow2 +20000M
	I0925 12:17:49.413404    4742 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:17:49.413427    4742 main.go:141] libmachine: STDERR: 
	I0925 12:17:49.413442    4742 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/force-systemd-env-884000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/force-systemd-env-884000/disk.qcow2
	I0925 12:17:49.413449    4742 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:17:49.413458    4742 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:17:49.413496    4742 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/force-systemd-env-884000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/force-systemd-env-884000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/force-systemd-env-884000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:4c:fd:c3:d6:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/force-systemd-env-884000/disk.qcow2
	I0925 12:17:49.415154    4742 main.go:141] libmachine: STDOUT: 
	I0925 12:17:49.415169    4742 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:17:49.415184    4742 client.go:171] duration metric: took 311.679958ms to LocalClient.Create
	I0925 12:17:51.417326    4742 start.go:128] duration metric: took 2.37861975s to createHost
	I0925 12:17:51.417506    4742 start.go:83] releasing machines lock for "force-systemd-env-884000", held for 2.379235625s
	W0925 12:17:51.417914    4742 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-884000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:17:51.437730    4742 out.go:201] 
	W0925 12:17:51.446635    4742 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:17:51.446664    4742 out.go:270] * 
	W0925 12:17:51.449183    4742 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 12:17:51.456512    4742 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-884000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
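Every create attempt above fails at the same point: socket_vmnet_client cannot reach the socket_vmnet daemon, so QEMU never receives its network file descriptor. A minimal sketch of that reachability check, for illustration only (this probe is not part of the test harness):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the unix socket that the qemu2 driver passes to socket_vmnet_client.
	// "connect: connection refused" here matches the failure in this log:
	// nothing is listening at /var/run/socket_vmnet on the build host.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the probe reports "connection refused", the problem is host-side (the socket_vmnet service is not running), which would explain why every qemu2-based test in this report fails the same way.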
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-884000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-884000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (76.117542ms)

-- stdout --
	* The control-plane node force-systemd-env-884000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-884000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-884000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-09-25 12:17:51.549389 -0700 PDT m=+2947.639179376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-884000 -n force-systemd-env-884000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-884000 -n force-systemd-env-884000: exit status 7 (34.133666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-884000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-884000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-884000
--- FAIL: TestForceSystemdEnv (11.04s)

TestFunctional/parallel/ServiceCmdConnect (34.78s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-251000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-251000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-5vcdl" [eb406e4f-dce3-49db-af4c-7b7f7f880af3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-5vcdl" [eb406e4f-dce3-49db-af4c-7b7f7f880af3] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E0925 11:47:38.831195    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/client.crt: no such file or directory" logger="UnhandledError"
E0925 11:47:38.838585    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/client.crt: no such file or directory" logger="UnhandledError"
E0925 11:47:38.851170    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.009666833s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:30279
functional_test.go:1661: error fetching http://192.168.105.4:30279: Get "http://192.168.105.4:30279": dial tcp 192.168.105.4:30279: connect: connection refused
I0925 11:47:44.478520    1934 retry.go:31] will retry after 513.728468ms: Get "http://192.168.105.4:30279": dial tcp 192.168.105.4:30279: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30279: Get "http://192.168.105.4:30279": dial tcp 192.168.105.4:30279: connect: connection refused
I0925 11:47:44.996105    1934 retry.go:31] will retry after 920.911764ms: Get "http://192.168.105.4:30279": dial tcp 192.168.105.4:30279: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30279: Get "http://192.168.105.4:30279": dial tcp 192.168.105.4:30279: connect: connection refused
I0925 11:47:45.920151    1934 retry.go:31] will retry after 1.746397946s: Get "http://192.168.105.4:30279": dial tcp 192.168.105.4:30279: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30279: Get "http://192.168.105.4:30279": dial tcp 192.168.105.4:30279: connect: connection refused
I0925 11:47:47.669150    1934 retry.go:31] will retry after 2.748230359s: Get "http://192.168.105.4:30279": dial tcp 192.168.105.4:30279: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30279: Get "http://192.168.105.4:30279": dial tcp 192.168.105.4:30279: connect: connection refused
I0925 11:47:50.421245    1934 retry.go:31] will retry after 5.365666274s: Get "http://192.168.105.4:30279": dial tcp 192.168.105.4:30279: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30279: Get "http://192.168.105.4:30279": dial tcp 192.168.105.4:30279: connect: connection refused
I0925 11:47:55.789723    1934 retry.go:31] will retry after 11.154380207s: Get "http://192.168.105.4:30279": dial tcp 192.168.105.4:30279: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30279: Get "http://192.168.105.4:30279": dial tcp 192.168.105.4:30279: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:30279: Get "http://192.168.105.4:30279": dial tcp 192.168.105.4:30279: connect: connection refused
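The retry intervals logged above (roughly 0.5s, 0.9s, 1.7s, 2.7s, 5.4s, 11.2s) follow a jittered exponential backoff. A minimal sketch of the same pattern; fetchWithBackoff is a hypothetical helper, not the harness's retry.go, and the URL is the endpoint from this run:

package main

import (
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

// fetchWithBackoff retries an HTTP GET, doubling the base delay each
// round and adding jitter, which reproduces the shape of the intervals
// logged above.
func fetchWithBackoff(url string, attempts int) error {
	delay := 500 * time.Millisecond
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			return nil // the endpoint answered
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return fmt.Errorf("gave up on %s after %d attempts", url, attempts)
}

func main() {
	_ = fetchWithBackoff("http://192.168.105.4:30279", 6)
}

Backoff only helps when the backend eventually comes up; here the pod never becomes Ready (see the post-mortem below), so all six attempts are refused.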
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-251000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-5vcdl
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-251000/192.168.105.4
Start Time:       Wed, 25 Sep 2024 11:47:33 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
  echoserver-arm:
    Container ID:   docker://1966ee0e6267040df6f3635f82cac032c98d73f4be5056e153d8da68fb783560
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 25 Sep 2024 11:47:53 -0700
      Finished:     Wed, 25 Sep 2024 11:47:54 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b8zgr (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-b8zgr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  33s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-5vcdl to functional-251000
  Normal   Pulling    33s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
  Normal   Pulled     30s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 2.815s (2.815s including waiting). Image size: 84957542 bytes.
  Normal   Created    13s (x3 over 30s)  kubelet            Created container echoserver-arm
  Normal   Pulled     13s (x2 over 29s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Normal   Started    12s (x3 over 30s)  kubelet            Started container echoserver-arm
  Warning  BackOff    0s (x4 over 28s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-5vcdl_default(eb406e4f-dce3-49db-af4c-7b7f7f880af3)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-251000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
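That single log line explains the CrashLoopBackOff in the pod events above: "exec format error" means the kernel refused to run the container's entrypoint because the binary targets a different CPU architecture than this arm64 node, i.e. the echoserver-arm:1.8 image appears to ship a non-arm64 /usr/sbin/nginx. A minimal sketch of inspecting an ELF binary's target machine (illustrative only; the path argument is whatever binary you extract from the image):

package main

import (
	"debug/elf"
	"fmt"
	"os"
)

func main() {
	// Usage: elfcheck <path-to-binary>, e.g. a copy of /usr/sbin/nginx
	// extracted from the image. On an arm64 node, anything other than
	// EM_AARCH64 is rejected by the kernel with "exec format error".
	f, err := elf.Open(os.Args[1])
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	defer f.Close()
	fmt.Println("built for:", f.Machine)
}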
functional_test.go:1614: (dbg) Run:  kubectl --context functional-251000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.101.157.239
IPs:                      10.101.157.239
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30279/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
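Note the empty Endpoints field above: with every replica crash-looping, the service has no Ready backends, which is why the NodePort probes earlier were refused rather than timing out. A minimal client-go sketch of the same check (assumptions: a kubeconfig at the default location and the service name from this run):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ep, err := cs.CoreV1().Endpoints("default").Get(context.TODO(), "hello-node-connect", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ready := 0
	for _, s := range ep.Subsets {
		ready += len(s.Addresses) // only Ready pod IPs count as endpoints
	}
	if ready == 0 {
		fmt.Println("no ready endpoints: NodePort connections will be refused")
	} else {
		fmt.Println("ready endpoints:", ready)
	}
}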
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-251000 -n functional-251000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-251000 ssh -- ls                                                                                          | functional-251000 | jenkins | v1.34.0 | 25 Sep 24 11:47 PDT | 25 Sep 24 11:47 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-251000 ssh cat                                                                                            | functional-251000 | jenkins | v1.34.0 | 25 Sep 24 11:47 PDT | 25 Sep 24 11:47 PDT |
	|           | /mount-9p/test-1727290073957401000                                                                                   |                   |         |         |                     |                     |
	| ssh       | functional-251000 ssh stat                                                                                           | functional-251000 | jenkins | v1.34.0 | 25 Sep 24 11:47 PDT | 25 Sep 24 11:47 PDT |
	|           | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-251000 ssh stat                                                                                           | functional-251000 | jenkins | v1.34.0 | 25 Sep 24 11:47 PDT | 25 Sep 24 11:47 PDT |
	|           | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-251000 ssh sudo                                                                                           | functional-251000 | jenkins | v1.34.0 | 25 Sep 24 11:48 PDT | 25 Sep 24 11:48 PDT |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-251000                                                                                                 | functional-251000 | jenkins | v1.34.0 | 25 Sep 24 11:48 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2735828830/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-251000 ssh findmnt                                                                                        | functional-251000 | jenkins | v1.34.0 | 25 Sep 24 11:48 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-251000 ssh findmnt                                                                                        | functional-251000 | jenkins | v1.34.0 | 25 Sep 24 11:48 PDT | 25 Sep 24 11:48 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-251000 ssh -- ls                                                                                          | functional-251000 | jenkins | v1.34.0 | 25 Sep 24 11:48 PDT | 25 Sep 24 11:48 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-251000 ssh sudo                                                                                           | functional-251000 | jenkins | v1.34.0 | 25 Sep 24 11:48 PDT |                     |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-251000 ssh findmnt                                                                                        | functional-251000 | jenkins | v1.34.0 | 25 Sep 24 11:48 PDT | 25 Sep 24 11:48 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-251000                                                                                                 | functional-251000 | jenkins | v1.34.0 | 25 Sep 24 11:48 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4278592423/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-251000                                                                                                 | functional-251000 | jenkins | v1.34.0 | 25 Sep 24 11:48 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4278592423/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-251000                                                                                                 | functional-251000 | jenkins | v1.34.0 | 25 Sep 24 11:48 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4278592423/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-251000 ssh findmnt                                                                                        | functional-251000 | jenkins | v1.34.0 | 25 Sep 24 11:48 PDT |                     |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-251000 ssh findmnt                                                                                        | functional-251000 | jenkins | v1.34.0 | 25 Sep 24 11:48 PDT | 25 Sep 24 11:48 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-251000 ssh findmnt                                                                                        | functional-251000 | jenkins | v1.34.0 | 25 Sep 24 11:48 PDT |                     |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-251000 ssh findmnt                                                                                        | functional-251000 | jenkins | v1.34.0 | 25 Sep 24 11:48 PDT | 25 Sep 24 11:48 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-251000 ssh findmnt                                                                                        | functional-251000 | jenkins | v1.34.0 | 25 Sep 24 11:48 PDT | 25 Sep 24 11:48 PDT |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-251000 ssh findmnt                                                                                        | functional-251000 | jenkins | v1.34.0 | 25 Sep 24 11:48 PDT | 25 Sep 24 11:48 PDT |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-251000                                                                                                 | functional-251000 | jenkins | v1.34.0 | 25 Sep 24 11:48 PDT |                     |
	|           | --kill=true                                                                                                          |                   |         |         |                     |                     |
	| start     | -p functional-251000                                                                                                 | functional-251000 | jenkins | v1.34.0 | 25 Sep 24 11:48 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-251000                                                                                                 | functional-251000 | jenkins | v1.34.0 | 25 Sep 24 11:48 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-251000 --dry-run                                                                                       | functional-251000 | jenkins | v1.34.0 | 25 Sep 24 11:48 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                   | functional-251000 | jenkins | v1.34.0 | 25 Sep 24 11:48 PDT |                     |
	|           | -p functional-251000                                                                                                 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/25 11:48:03
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0925 11:48:03.628069    3193 out.go:345] Setting OutFile to fd 1 ...
	I0925 11:48:03.628193    3193 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 11:48:03.628196    3193 out.go:358] Setting ErrFile to fd 2...
	I0925 11:48:03.628199    3193 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 11:48:03.628332    3193 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 11:48:03.629349    3193 out.go:352] Setting JSON to false
	I0925 11:48:03.646187    3193 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2854,"bootTime":1727287229,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 11:48:03.646264    3193 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 11:48:03.651184    3193 out.go:177] * [functional-251000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 11:48:03.658092    3193 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 11:48:03.658176    3193 notify.go:220] Checking for updates...
	I0925 11:48:03.665250    3193 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 11:48:03.666667    3193 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 11:48:03.670175    3193 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 11:48:03.673203    3193 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 11:48:03.676212    3193 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 11:48:03.679602    3193 config.go:182] Loaded profile config "functional-251000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 11:48:03.679863    3193 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 11:48:03.684193    3193 out.go:177] * Using the qemu2 driver based on existing profile
	I0925 11:48:03.691198    3193 start.go:297] selected driver: qemu2
	I0925 11:48:03.691208    3193 start.go:901] validating driver "qemu2" against &{Name:functional-251000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-251000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 11:48:03.691286    3193 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 11:48:03.693590    3193 cni.go:84] Creating CNI manager for ""
	I0925 11:48:03.693625    3193 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 11:48:03.693670    3193 start.go:340] cluster config:
	{Name:functional-251000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-251000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 11:48:03.704151    3193 out.go:177] * dry-run validation complete!
	
	
	==> Docker <==
	Sep 25 18:47:56 functional-251000 dockerd[5687]: time="2024-09-25T18:47:56.838981265Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 18:47:58 functional-251000 dockerd[5680]: time="2024-09-25T18:47:58.934934592Z" level=info msg="ignoring event" container=4fa45675f6ad5b450720be4562afe15bbca924a3301dc5f52ae37c5302a5b8cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 18:47:58 functional-251000 dockerd[5687]: time="2024-09-25T18:47:58.935202864Z" level=info msg="shim disconnected" id=4fa45675f6ad5b450720be4562afe15bbca924a3301dc5f52ae37c5302a5b8cf namespace=moby
	Sep 25 18:47:58 functional-251000 dockerd[5687]: time="2024-09-25T18:47:58.935253925Z" level=warning msg="cleaning up after shim disconnected" id=4fa45675f6ad5b450720be4562afe15bbca924a3301dc5f52ae37c5302a5b8cf namespace=moby
	Sep 25 18:47:58 functional-251000 dockerd[5687]: time="2024-09-25T18:47:58.935259219Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 18:47:58 functional-251000 dockerd[5687]: time="2024-09-25T18:47:58.939596626Z" level=warning msg="cleanup warnings time=\"2024-09-25T18:47:58Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 25 18:48:04 functional-251000 dockerd[5687]: time="2024-09-25T18:48:04.010729753Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 18:48:04 functional-251000 dockerd[5687]: time="2024-09-25T18:48:04.010758139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 18:48:04 functional-251000 dockerd[5687]: time="2024-09-25T18:48:04.010778314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 18:48:04 functional-251000 dockerd[5687]: time="2024-09-25T18:48:04.010810368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 18:48:04 functional-251000 dockerd[5680]: time="2024-09-25T18:48:04.056302777Z" level=info msg="ignoring event" container=384c055a79a40a9b732bd4a5fddde44d87b4c340e066db081d4f8de964eb0145 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 18:48:04 functional-251000 dockerd[5687]: time="2024-09-25T18:48:04.056338208Z" level=info msg="shim disconnected" id=384c055a79a40a9b732bd4a5fddde44d87b4c340e066db081d4f8de964eb0145 namespace=moby
	Sep 25 18:48:04 functional-251000 dockerd[5687]: time="2024-09-25T18:48:04.056362676Z" level=warning msg="cleaning up after shim disconnected" id=384c055a79a40a9b732bd4a5fddde44d87b4c340e066db081d4f8de964eb0145 namespace=moby
	Sep 25 18:48:04 functional-251000 dockerd[5687]: time="2024-09-25T18:48:04.056366719Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 18:48:04 functional-251000 dockerd[5687]: time="2024-09-25T18:48:04.663879071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 18:48:04 functional-251000 dockerd[5687]: time="2024-09-25T18:48:04.663915377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 18:48:04 functional-251000 dockerd[5687]: time="2024-09-25T18:48:04.663920713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 18:48:04 functional-251000 dockerd[5687]: time="2024-09-25T18:48:04.663947473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 18:48:04 functional-251000 dockerd[5687]: time="2024-09-25T18:48:04.669409187Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 18:48:04 functional-251000 dockerd[5687]: time="2024-09-25T18:48:04.669621769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 18:48:04 functional-251000 dockerd[5687]: time="2024-09-25T18:48:04.669653198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 18:48:04 functional-251000 dockerd[5687]: time="2024-09-25T18:48:04.669737731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 18:48:04 functional-251000 cri-dockerd[5935]: time="2024-09-25T18:48:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6fa11a283d3207ca7d404fa6c0eb4a19dbcfd92686d3f9946ee04b03eec98fc9/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 25 18:48:04 functional-251000 cri-dockerd[5935]: time="2024-09-25T18:48:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b96694938246d9db3f7186334e8c507a4c65fb2bf95e2104693020a16cafd1b5/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 25 18:48:04 functional-251000 dockerd[5680]: time="2024-09-25T18:48:04.959684677Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" spanID=23b13692c62b6987 traceID=f7c4305cacb59323e5810166ed06e34e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	384c055a79a40       72565bf5bbedf                                                                                         4 seconds ago        Exited              echoserver-arm            2                   60d2866270fb8       hello-node-64b4f8f9ff-cgdns
	db4a2492fd13a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   11 seconds ago       Exited              mount-munger              0                   4fa45675f6ad5       busybox-mount
	1966ee0e62670       72565bf5bbedf                                                                                         14 seconds ago       Exited              echoserver-arm            2                   6da5e7317ff6d       hello-node-connect-65d86f57f4-5vcdl
	072352f615fbf       nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3                         27 seconds ago       Running             myfrontend                0                   3825d05fde017       sp-pod
	8ba8208f82957       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                         41 seconds ago       Running             nginx                     0                   00c657d2598ad       nginx-svc
	ee0fc76010cbd       2f6c962e7b831                                                                                         About a minute ago   Running             coredns                   2                   a86ca159333b8       coredns-7c65d6cfc9-56pz2
	a16330c34d962       24a140c548c07                                                                                         About a minute ago   Running             kube-proxy                2                   206515d38b6f2       kube-proxy-4lkvp
	036afe932b69a       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       2                   86f692981fabe       storage-provisioner
	7ddca40832084       279f381cb3736                                                                                         About a minute ago   Running             kube-controller-manager   2                   29420890d9da6       kube-controller-manager-functional-251000
	490fa8f9990ae       27e3830e14027                                                                                         About a minute ago   Running             etcd                      2                   3fc5160d45f22       etcd-functional-251000
	52f7c8f0ec238       7f8aa378bb47d                                                                                         About a minute ago   Running             kube-scheduler            2                   6e83a4154bd68       kube-scheduler-functional-251000
	22dcfacc4de95       d3f53a98c0a9d                                                                                         About a minute ago   Running             kube-apiserver            0                   886caeeed2727       kube-apiserver-functional-251000
	c31833ac86a41       2f6c962e7b831                                                                                         About a minute ago   Exited              coredns                   1                   06835619a960c       coredns-7c65d6cfc9-56pz2
	94a45ffcda9c7       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       1                   6b141bfcf2ea5       storage-provisioner
	2f1546397cf11       24a140c548c07                                                                                         About a minute ago   Exited              kube-proxy                1                   c3bc17546392f       kube-proxy-4lkvp
	f1fb5bc55cbcd       27e3830e14027                                                                                         About a minute ago   Exited              etcd                      1                   d987ce4aa8333       etcd-functional-251000
	84c5a6b4f48a9       279f381cb3736                                                                                         About a minute ago   Exited              kube-controller-manager   1                   3e28263584e1e       kube-controller-manager-functional-251000
	d0ceb1ca5c780       7f8aa378bb47d                                                                                         About a minute ago   Exited              kube-scheduler            1                   f33b791d3d340       kube-scheduler-functional-251000
	
	
	==> coredns [c31833ac86a4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:60018 - 35767 "HINFO IN 2079567758905125896.7680292784947867385. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004542821s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ee0fc76010cb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46004 - 64757 "HINFO IN 5592587826985592534.6002743062139496575. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004706802s
	[INFO] 10.244.0.1:39131 - 55807 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000102584s
	[INFO] 10.244.0.1:27992 - 14578 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000099041s
	[INFO] 10.244.0.1:11395 - 63580 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000033472s
	[INFO] 10.244.0.1:7299 - 4650 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001131177s
	[INFO] 10.244.0.1:5283 - 31140 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000086661s
	[INFO] 10.244.0.1:48170 - 32319 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000255564s
	
	
	==> describe nodes <==
	Name:               functional-251000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-251000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cb9e6220ecbd737c1d09ad9630c6f144f437664a
	                    minikube.k8s.io/name=functional-251000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_25T11_45_44_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 25 Sep 2024 18:45:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-251000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 25 Sep 2024 18:47:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 25 Sep 2024 18:47:59 +0000   Wed, 25 Sep 2024 18:45:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 25 Sep 2024 18:47:59 +0000   Wed, 25 Sep 2024 18:45:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 25 Sep 2024 18:47:59 +0000   Wed, 25 Sep 2024 18:45:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 25 Sep 2024 18:47:59 +0000   Wed, 25 Sep 2024 18:45:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-251000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 eb337c51a7374e1aa4ade2c55e339585
	  System UUID:                eb337c51a7374e1aa4ade2c55e339585
	  Boot ID:                    5047a034-1530-4506-bddf-71dbe46d0152
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-cgdns                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  default                     hello-node-connect-65d86f57f4-5vcdl          0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 coredns-7c65d6cfc9-56pz2                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m19s
	  kube-system                 etcd-functional-251000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m24s
	  kube-system                 kube-apiserver-functional-251000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-controller-manager-functional-251000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-proxy-4lkvp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-scheduler-functional-251000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-984m2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-9mmhs        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m18s                  kube-proxy       
	  Normal  Starting                 68s                    kube-proxy       
	  Normal  Starting                 112s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  2m29s (x8 over 2m29s)  kubelet          Node functional-251000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m29s (x8 over 2m29s)  kubelet          Node functional-251000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m29s (x7 over 2m29s)  kubelet          Node functional-251000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m25s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m24s                  kubelet          Node functional-251000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m24s                  kubelet          Node functional-251000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m24s                  kubelet          Node functional-251000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m21s                  kubelet          Node functional-251000 status is now: NodeReady
	  Normal  RegisteredNode           2m20s                  node-controller  Node functional-251000 event: Registered Node functional-251000 in Controller
	  Normal  Starting                 116s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  116s (x8 over 116s)    kubelet          Node functional-251000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s (x8 over 116s)    kubelet          Node functional-251000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s (x7 over 116s)    kubelet          Node functional-251000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  116s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           110s                   node-controller  Node functional-251000 event: Registered Node functional-251000 in Controller
	  Normal  Starting                 72s                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  72s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    71s (x8 over 72s)      kubelet          Node functional-251000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     71s (x7 over 72s)      kubelet          Node functional-251000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  71s (x8 over 72s)      kubelet          Node functional-251000 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           65s                    node-controller  Node functional-251000 event: Registered Node functional-251000 in Controller
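
	The node report above is standard kubectl output; to regenerate it against this profile while the cluster is still up:

	kubectl --context functional-251000 describe node functional-251000
	kubectl --context functional-251000 get node functional-251000 -o wide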
	
	
	==> dmesg <==
	[  +3.414909] kauditd_printk_skb: 199 callbacks suppressed
	[  +6.882410] kauditd_printk_skb: 35 callbacks suppressed
	[  +7.271103] systemd-fstab-generator[4763]: Ignoring "noauto" option for root device
	[ +12.036648] systemd-fstab-generator[5210]: Ignoring "noauto" option for root device
	[  +0.052883] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.115792] systemd-fstab-generator[5244]: Ignoring "noauto" option for root device
	[  +0.116297] systemd-fstab-generator[5256]: Ignoring "noauto" option for root device
	[  +0.098722] systemd-fstab-generator[5270]: Ignoring "noauto" option for root device
	[  +5.116405] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.392374] systemd-fstab-generator[5888]: Ignoring "noauto" option for root device
	[  +0.096121] systemd-fstab-generator[5900]: Ignoring "noauto" option for root device
	[  +0.088747] systemd-fstab-generator[5912]: Ignoring "noauto" option for root device
	[  +0.104831] systemd-fstab-generator[5927]: Ignoring "noauto" option for root device
	[  +0.225060] systemd-fstab-generator[6097]: Ignoring "noauto" option for root device
	[  +1.170415] systemd-fstab-generator[6221]: Ignoring "noauto" option for root device
	[  +1.022268] kauditd_printk_skb: 179 callbacks suppressed
	[Sep25 18:47] kauditd_printk_skb: 51 callbacks suppressed
	[ +11.687554] systemd-fstab-generator[7267]: Ignoring "noauto" option for root device
	[  +5.322845] kauditd_printk_skb: 16 callbacks suppressed
	[  +7.497325] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.217979] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.224879] kauditd_printk_skb: 25 callbacks suppressed
	[  +7.624527] kauditd_printk_skb: 15 callbacks suppressed
	[  +7.248399] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.056344] kauditd_printk_skb: 20 callbacks suppressed
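
	The kernel ring buffer above can also be read live from the guest. A sketch using the driver binary invoked elsewhere in this report (assumes the VM is still running):

	out/minikube-darwin-arm64 -p functional-251000 ssh -- sudo dmesg | tail -n 30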
	
	
	==> etcd [490fa8f9990a] <==
	{"level":"info","ts":"2024-09-25T18:46:57.032818Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-25T18:46:57.032834Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-25T18:46:57.032838Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-25T18:46:57.032929Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-25T18:46:57.032936Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-25T18:46:57.033430Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2024-09-25T18:46:57.033453Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-09-25T18:46:57.033483Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-25T18:46:57.033493Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-25T18:46:58.191279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-25T18:46:58.191431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-25T18:46:58.191478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-25T18:46:58.191511Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-09-25T18:46:58.191527Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-25T18:46:58.191559Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-09-25T18:46:58.191584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-25T18:46:58.193662Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-251000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-25T18:46:58.193664Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-25T18:46:58.194336Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-25T18:46:58.194699Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-25T18:46:58.194743Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-25T18:46:58.196480Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-25T18:46:58.196480Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-25T18:46:58.198199Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-09-25T18:46:58.198855Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [f1fb5bc55cbc] <==
	{"level":"info","ts":"2024-09-25T18:46:13.927929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-25T18:46:13.928008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-09-25T18:46:13.928045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-09-25T18:46:13.928100Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-25T18:46:13.928147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-09-25T18:46:13.928168Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-25T18:46:13.930550Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-251000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-25T18:46:13.930621Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-25T18:46:13.931328Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-25T18:46:13.931371Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-25T18:46:13.931411Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-25T18:46:13.932800Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-25T18:46:13.932887Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-25T18:46:13.935097Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-25T18:46:13.935804Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-09-25T18:46:41.749173Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-25T18:46:41.749203Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-251000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-09-25T18:46:41.749245Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-25T18:46:41.749289Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-25T18:46:41.770156Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-25T18:46:41.770183Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-25T18:46:41.770219Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-09-25T18:46:41.771880Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-25T18:46:41.771914Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-25T18:46:41.771920Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-251000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> kernel <==
	 18:48:07 up 2 min,  0 users,  load average: 0.46, 0.35, 0.15
	Linux functional-251000 5.10.207 #1 SMP PREEMPT Mon Sep 23 18:07:35 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [22dcfacc4de9] <==
	I0925 18:46:58.808378       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0925 18:46:58.808380       1 cache.go:39] Caches are synced for autoregister controller
	E0925 18:46:58.809252       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0925 18:46:58.814174       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0925 18:46:58.815275       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0925 18:46:58.815296       1 policy_source.go:224] refreshing policies
	I0925 18:46:58.829590       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0925 18:46:59.692647       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0925 18:47:00.046846       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0925 18:47:00.050466       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0925 18:47:00.066206       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0925 18:47:00.085304       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0925 18:47:00.090147       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0925 18:47:02.177203       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0925 18:47:02.276418       1 controller.go:615] quota admission added evaluator for: endpoints
	I0925 18:47:18.846461       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.108.127.213"}
	I0925 18:47:23.006573       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.103.140.238"}
	I0925 18:47:33.383884       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0925 18:47:33.422711       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.101.157.239"}
	E0925 18:47:39.051132       1 conn.go:339] Error on socket receive: read tcp 192.168.105.4:8441->192.168.105.1:49678: use of closed network connection
	E0925 18:47:46.693939       1 conn.go:339] Error on socket receive: read tcp 192.168.105.4:8441->192.168.105.1:49688: use of closed network connection
	I0925 18:47:46.771521       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.101.148.221"}
	I0925 18:48:04.261985       1 controller.go:615] quota admission added evaluator for: namespaces
	I0925 18:48:04.355936       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.70.2"}
	I0925 18:48:04.366703       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.181.2"}
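
	Each "allocated clusterIPs" line above corresponds to a Service object the tests created; to list them with their IPs in one view:

	kubectl --context functional-251000 get svc -A -o wide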
	
	
	==> kube-controller-manager [7ddca4083208] <==
	I0925 18:47:48.712122       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="42.726µs"
	I0925 18:47:54.785132       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="25.635µs"
	I0925 18:47:59.648093       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-251000"
	I0925 18:48:03.975893       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="21.258µs"
	I0925 18:48:04.291546       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="11.878383ms"
	E0925 18:48:04.291567       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0925 18:48:04.291613       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="8.022337ms"
	E0925 18:48:04.291647       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0925 18:48:04.297065       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="3.770012ms"
	E0925 18:48:04.297085       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0925 18:48:04.297491       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="4.119315ms"
	E0925 18:48:04.297502       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0925 18:48:04.300700       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="1.615255ms"
	E0925 18:48:04.300716       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0925 18:48:04.300735       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="1.759436ms"
	E0925 18:48:04.300742       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0925 18:48:04.329334       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="20.224597ms"
	I0925 18:48:04.329524       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="20.392078ms"
	I0925 18:48:04.348092       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="18.552736ms"
	I0925 18:48:04.348123       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="11.13µs"
	I0925 18:48:04.352553       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="23.199589ms"
	I0925 18:48:04.352594       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="18.132µs"
	I0925 18:48:04.355049       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="19.924µs"
	I0925 18:48:04.926207       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="22.009µs"
	I0925 18:48:06.973376       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="22.509µs"
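
	The repeated serviceaccount "kubernetes-dashboard" not found errors above are a startup-ordering race, not a persistent failure: the ReplicaSet controller retried until the dashboard addon created its ServiceAccount, after which the 18:48:04.329 sync entries complete without error. A quick check that the account now exists:

	kubectl --context functional-251000 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard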
	
	
	==> kube-controller-manager [84c5a6b4f48a] <==
	I0925 18:46:17.715314       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0925 18:46:17.716391       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0925 18:46:17.717540       1 shared_informer.go:320] Caches are synced for expand
	I0925 18:46:17.758055       1 shared_informer.go:320] Caches are synced for TTL
	I0925 18:46:17.759059       1 shared_informer.go:320] Caches are synced for persistent volume
	I0925 18:46:17.763497       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0925 18:46:17.765805       1 shared_informer.go:320] Caches are synced for node
	I0925 18:46:17.765862       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0925 18:46:17.765874       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0925 18:46:17.765877       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0925 18:46:17.765934       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0925 18:46:17.766050       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-251000"
	I0925 18:46:17.804839       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0925 18:46:17.858407       1 shared_informer.go:320] Caches are synced for attach detach
	I0925 18:46:17.883957       1 shared_informer.go:320] Caches are synced for resource quota
	I0925 18:46:17.914599       1 shared_informer.go:320] Caches are synced for stateful set
	I0925 18:46:17.924430       1 shared_informer.go:320] Caches are synced for daemon sets
	I0925 18:46:17.925601       1 shared_informer.go:320] Caches are synced for resource quota
	I0925 18:46:18.066387       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="407.33094ms"
	I0925 18:46:18.066454       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="39.07µs"
	I0925 18:46:18.364963       1 shared_informer.go:320] Caches are synced for garbage collector
	I0925 18:46:18.365025       1 shared_informer.go:320] Caches are synced for garbage collector
	I0925 18:46:18.365442       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0925 18:46:21.935366       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="7.589555ms"
	I0925 18:46:21.935633       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="60.29µs"
	
	
	==> kube-proxy [2f1546397cf1] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0925 18:46:15.301829       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0925 18:46:15.307601       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0925 18:46:15.307636       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0925 18:46:15.320218       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0925 18:46:15.320237       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0925 18:46:15.320250       1 server_linux.go:169] "Using iptables Proxier"
	I0925 18:46:15.320951       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0925 18:46:15.321085       1 server.go:483] "Version info" version="v1.31.1"
	I0925 18:46:15.321119       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0925 18:46:15.321594       1 config.go:199] "Starting service config controller"
	I0925 18:46:15.321612       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0925 18:46:15.321670       1 config.go:105] "Starting endpoint slice config controller"
	I0925 18:46:15.321677       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0925 18:46:15.321882       1 config.go:328] "Starting node config controller"
	I0925 18:46:15.321978       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0925 18:46:15.422099       1 shared_informer.go:320] Caches are synced for node config
	I0925 18:46:15.422117       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0925 18:46:15.422166       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [a16330c34d96] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0925 18:46:59.511165       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0925 18:46:59.515015       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0925 18:46:59.515042       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0925 18:46:59.522326       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0925 18:46:59.522342       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0925 18:46:59.522352       1 server_linux.go:169] "Using iptables Proxier"
	I0925 18:46:59.523149       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0925 18:46:59.523310       1 server.go:483] "Version info" version="v1.31.1"
	I0925 18:46:59.523342       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0925 18:46:59.523814       1 config.go:199] "Starting service config controller"
	I0925 18:46:59.523829       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0925 18:46:59.523866       1 config.go:105] "Starting endpoint slice config controller"
	I0925 18:46:59.523873       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0925 18:46:59.524074       1 config.go:328] "Starting node config controller"
	I0925 18:46:59.524122       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0925 18:46:59.623999       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0925 18:46:59.623999       1 shared_informer.go:320] Caches are synced for service config
	I0925 18:46:59.624204       1 shared_informer.go:320] Caches are synced for node config
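
	The nftables cleanup failures in both kube-proxy logs are non-fatal: each instance logs "Using iptables Proxier" immediately afterwards, meaning this guest kernel lacks nftables support and the proxy fell back to iptables. One way to confirm the iptables rules are in place inside the VM (a sketch, assuming the profile is still up):

	out/minikube-darwin-arm64 -p functional-251000 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n | head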
	
	
	==> kube-scheduler [52f7c8f0ec23] <==
	I0925 18:46:57.737967       1 serving.go:386] Generated self-signed cert in-memory
	W0925 18:46:58.743178       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0925 18:46:58.743306       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0925 18:46:58.743336       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0925 18:46:58.743351       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0925 18:46:58.748780       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0925 18:46:58.748824       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0925 18:46:58.750323       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0925 18:46:58.750529       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0925 18:46:58.750538       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0925 18:46:58.750574       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0925 18:46:58.851430       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d0ceb1ca5c78] <==
	I0925 18:46:13.007977       1 serving.go:386] Generated self-signed cert in-memory
	W0925 18:46:14.450063       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0925 18:46:14.450081       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0925 18:46:14.450086       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0925 18:46:14.450089       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0925 18:46:14.476452       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0925 18:46:14.476487       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0925 18:46:14.489880       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0925 18:46:14.489898       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0925 18:46:14.490019       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0925 18:46:14.490122       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0925 18:46:14.590668       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0925 18:46:41.752283       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 25 18:47:55 functional-251000 kubelet[6228]: E0925 18:47:55.973458    6228 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 25 18:47:55 functional-251000 kubelet[6228]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 25 18:47:55 functional-251000 kubelet[6228]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 25 18:47:55 functional-251000 kubelet[6228]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 25 18:47:55 functional-251000 kubelet[6228]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 25 18:47:56 functional-251000 kubelet[6228]: I0925 18:47:56.051830    6228 scope.go:117] "RemoveContainer" containerID="9f66f74fc738d87b12ad33fe9559142c7569fec868bc47c219a7859fd62bd033"
	Sep 25 18:47:59 functional-251000 kubelet[6228]: I0925 18:47:59.041069    6228 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/3529e6c1-bec0-4f8d-b780-03b09af1ae5e-test-volume\") pod \"3529e6c1-bec0-4f8d-b780-03b09af1ae5e\" (UID: \"3529e6c1-bec0-4f8d-b780-03b09af1ae5e\") "
	Sep 25 18:47:59 functional-251000 kubelet[6228]: I0925 18:47:59.041095    6228 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3529e6c1-bec0-4f8d-b780-03b09af1ae5e-test-volume" (OuterVolumeSpecName: "test-volume") pod "3529e6c1-bec0-4f8d-b780-03b09af1ae5e" (UID: "3529e6c1-bec0-4f8d-b780-03b09af1ae5e"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 25 18:47:59 functional-251000 kubelet[6228]: I0925 18:47:59.041099    6228 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blsn5\" (UniqueName: \"kubernetes.io/projected/3529e6c1-bec0-4f8d-b780-03b09af1ae5e-kube-api-access-blsn5\") pod \"3529e6c1-bec0-4f8d-b780-03b09af1ae5e\" (UID: \"3529e6c1-bec0-4f8d-b780-03b09af1ae5e\") "
	Sep 25 18:47:59 functional-251000 kubelet[6228]: I0925 18:47:59.041124    6228 reconciler_common.go:288] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/3529e6c1-bec0-4f8d-b780-03b09af1ae5e-test-volume\") on node \"functional-251000\" DevicePath \"\""
	Sep 25 18:47:59 functional-251000 kubelet[6228]: I0925 18:47:59.044421    6228 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3529e6c1-bec0-4f8d-b780-03b09af1ae5e-kube-api-access-blsn5" (OuterVolumeSpecName: "kube-api-access-blsn5") pod "3529e6c1-bec0-4f8d-b780-03b09af1ae5e" (UID: "3529e6c1-bec0-4f8d-b780-03b09af1ae5e"). InnerVolumeSpecName "kube-api-access-blsn5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 25 18:47:59 functional-251000 kubelet[6228]: I0925 18:47:59.142293    6228 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-blsn5\" (UniqueName: \"kubernetes.io/projected/3529e6c1-bec0-4f8d-b780-03b09af1ae5e-kube-api-access-blsn5\") on node \"functional-251000\" DevicePath \"\""
	Sep 25 18:47:59 functional-251000 kubelet[6228]: I0925 18:47:59.873472    6228 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fa45675f6ad5b450720be4562afe15bbca924a3301dc5f52ae37c5302a5b8cf"
	Sep 25 18:48:03 functional-251000 kubelet[6228]: I0925 18:48:03.968310    6228 scope.go:117] "RemoveContainer" containerID="5228fc7ad1fcb3e3818244f423088723044e22e95bae10070738f2f08611d61d"
	Sep 25 18:48:04 functional-251000 kubelet[6228]: E0925 18:48:04.328896    6228 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3529e6c1-bec0-4f8d-b780-03b09af1ae5e" containerName="mount-munger"
	Sep 25 18:48:04 functional-251000 kubelet[6228]: I0925 18:48:04.328941    6228 memory_manager.go:354] "RemoveStaleState removing state" podUID="3529e6c1-bec0-4f8d-b780-03b09af1ae5e" containerName="mount-munger"
	Sep 25 18:48:04 functional-251000 kubelet[6228]: I0925 18:48:04.491303    6228 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdkvd\" (UniqueName: \"kubernetes.io/projected/cadd00e2-8ed6-4771-b2cc-31653a1fabfb-kube-api-access-gdkvd\") pod \"kubernetes-dashboard-695b96c756-9mmhs\" (UID: \"cadd00e2-8ed6-4771-b2cc-31653a1fabfb\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-9mmhs"
	Sep 25 18:48:04 functional-251000 kubelet[6228]: I0925 18:48:04.491346    6228 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/cadd00e2-8ed6-4771-b2cc-31653a1fabfb-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-9mmhs\" (UID: \"cadd00e2-8ed6-4771-b2cc-31653a1fabfb\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-9mmhs"
	Sep 25 18:48:04 functional-251000 kubelet[6228]: I0925 18:48:04.491357    6228 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kzwj\" (UniqueName: \"kubernetes.io/projected/7b6e6403-5c5e-46a0-a3a5-8108cdc8e787-kube-api-access-7kzwj\") pod \"dashboard-metrics-scraper-c5db448b4-984m2\" (UID: \"7b6e6403-5c5e-46a0-a3a5-8108cdc8e787\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-984m2"
	Sep 25 18:48:04 functional-251000 kubelet[6228]: I0925 18:48:04.491368    6228 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7b6e6403-5c5e-46a0-a3a5-8108cdc8e787-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-984m2\" (UID: \"7b6e6403-5c5e-46a0-a3a5-8108cdc8e787\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-984m2"
	Sep 25 18:48:04 functional-251000 kubelet[6228]: I0925 18:48:04.920829    6228 scope.go:117] "RemoveContainer" containerID="5228fc7ad1fcb3e3818244f423088723044e22e95bae10070738f2f08611d61d"
	Sep 25 18:48:04 functional-251000 kubelet[6228]: I0925 18:48:04.920986    6228 scope.go:117] "RemoveContainer" containerID="384c055a79a40a9b732bd4a5fddde44d87b4c340e066db081d4f8de964eb0145"
	Sep 25 18:48:04 functional-251000 kubelet[6228]: E0925 18:48:04.921059    6228 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-cgdns_default(e334014c-6953-4eac-a21a-14f40883b521)\"" pod="default/hello-node-64b4f8f9ff-cgdns" podUID="e334014c-6953-4eac-a21a-14f40883b521"
	Sep 25 18:48:06 functional-251000 kubelet[6228]: I0925 18:48:06.967328    6228 scope.go:117] "RemoveContainer" containerID="1966ee0e6267040df6f3635f82cac032c98d73f4be5056e153d8da68fb783560"
	Sep 25 18:48:06 functional-251000 kubelet[6228]: E0925 18:48:06.967643    6228 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-5vcdl_default(eb406e4f-dce3-49db-af4c-7b7f7f880af3)\"" pod="default/hello-node-connect-65d86f57f4-5vcdl" podUID="eb406e4f-dce3-49db-af4c-7b7f7f880af3"
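
	The CrashLoopBackOff entries above point at the echoserver-arm containers behind both hello-node deployments. A sketch for pulling the crashed container's output (kubectl resolves a pod from the deployment reference; --previous selects the terminated instance):

	kubectl --context functional-251000 logs deployment/hello-node-connect --previous
	kubectl --context functional-251000 logs deployment/hello-node --previous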
	
	
	==> storage-provisioner [036afe932b69] <==
	I0925 18:46:59.447115       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0925 18:46:59.452322       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0925 18:46:59.452452       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0925 18:47:16.858236       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0925 18:47:16.858658       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1f7c1d50-50eb-45d6-8dea-f575d4fe2267", APIVersion:"v1", ResourceVersion:"629", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-251000_91ae22ff-d8b5-4934-ba6a-2000c441349a became leader
	I0925 18:47:16.858927       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-251000_91ae22ff-d8b5-4934-ba6a-2000c441349a!
	I0925 18:47:16.959670       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-251000_91ae22ff-d8b5-4934-ba6a-2000c441349a!
	I0925 18:47:27.842764       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0925 18:47:27.843155       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    9645c22a-adde-4fbd-9133-dc0cdd92c82b 343 0 2024-09-25 18:45:48 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-09-25 18:45:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-e817d86c-d281-46d4-96fe-0c4ff73d14ce &PersistentVolumeClaim{ObjectMeta:{myclaim  default  e817d86c-d281-46d4-96fe-0c4ff73d14ce 686 0 2024-09-25 18:47:27 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-09-25 18:47:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-09-25 18:47:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0925 18:47:27.843694       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-e817d86c-d281-46d4-96fe-0c4ff73d14ce" provisioned
	I0925 18:47:27.843704       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0925 18:47:27.843722       1 volume_store.go:212] Trying to save persistentvolume "pvc-e817d86c-d281-46d4-96fe-0c4ff73d14ce"
	I0925 18:47:27.844235       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"e817d86c-d281-46d4-96fe-0c4ff73d14ce", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0925 18:47:27.848698       1 volume_store.go:219] persistentvolume "pvc-e817d86c-d281-46d4-96fe-0c4ff73d14ce" saved
	I0925 18:47:27.850135       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"e817d86c-d281-46d4-96fe-0c4ff73d14ce", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-e817d86c-d281-46d4-96fe-0c4ff73d14ce
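
	The claim provisioned above can be reconstructed from the object dump in the log. A minimal sketch of an equivalent manifest (the test's actual manifest may differ in details; storageClassName is omitted because "standard" is marked as the default class, and the <<- heredoc strips the leading tabs shown here):

	kubectl --context functional-251000 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	  namespace: default
	spec:
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 500Mi
	EOF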
	
	
	==> storage-provisioner [94a45ffcda9c] <==
	I0925 18:46:15.238434       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0925 18:46:15.243933       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0925 18:46:15.243963       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0925 18:46:15.250761       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0925 18:46:15.251079       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-251000_6c2ef340-ddd7-4555-89fc-dafb75c6225c!
	I0925 18:46:15.251126       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1f7c1d50-50eb-45d6-8dea-f575d4fe2267", APIVersion:"v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-251000_6c2ef340-ddd7-4555-89fc-dafb75c6225c became leader
	I0925 18:46:15.353804       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-251000_6c2ef340-ddd7-4555-89fc-dafb75c6225c!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-251000 -n functional-251000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-251000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-c5db448b4-984m2 kubernetes-dashboard-695b96c756-9mmhs
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-251000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-984m2 kubernetes-dashboard-695b96c756-9mmhs
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-251000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-984m2 kubernetes-dashboard-695b96c756-9mmhs: exit status 1 (41.249958ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-251000/192.168.105.4
	Start Time:       Wed, 25 Sep 2024 11:47:54 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://db4a2492fd13aba9791fcd99b23321a673528f0d6d9b9c9a0376023b2092c0b5
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 25 Sep 2024 11:47:56 -0700
	      Finished:     Wed, 25 Sep 2024 11:47:56 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-blsn5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-blsn5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  13s   default-scheduler  Successfully assigned default/busybox-mount to functional-251000
	  Normal  Pulling    13s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     12s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.462s (1.462s including waiting). Image size: 3547125 bytes.
	  Normal  Created    12s   kubelet            Created container mount-munger
	  Normal  Started    12s   kubelet            Started container mount-munger

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-c5db448b4-984m2" not found
	Error from server (NotFound): pods "kubernetes-dashboard-695b96c756-9mmhs" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-251000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-984m2 kubernetes-dashboard-695b96c756-9mmhs: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (34.78s)
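
The post-mortem above gathers non-running pods with the field selector status.phase!=Running (helpers_test.go:261). The same query can be issued programmatically; a minimal client-go sketch, assuming a kubeconfig at the default path:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Empty namespace = all namespaces, like `kubectl get po -A`.
		pods, err := client.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
			FieldSelector: "status.phase!=Running",
		})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Namespace, p.Name, p.Status.Phase)
		}
	}

Note that `kubectl describe pod a b c` exits non-zero if any single name is missing, which is why the describe step above fails once the dashboard pods are gone even though busybox-mount still exists.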

TestMultiControlPlane/serial/StopSecondaryNode (162.3s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 node stop m02 -v=7 --alsologtostderr
E0925 11:52:27.754393    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/functional-251000/client.crt: no such file or directory" logger="UnhandledError"
E0925 11:52:32.877670    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/functional-251000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-813000 node stop m02 -v=7 --alsologtostderr: (12.199489417s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 status -v=7 --alsologtostderr
E0925 11:52:38.824796    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/client.crt: no such file or directory" logger="UnhandledError"
E0925 11:52:43.120847    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/functional-251000/client.crt: no such file or directory" logger="UnhandledError"
E0925 11:53:03.604037    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/functional-251000/client.crt: no such file or directory" logger="UnhandledError"
E0925 11:53:06.547646    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/client.crt: no such file or directory" logger="UnhandledError"
E0925 11:53:44.566766    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/functional-251000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Done: out/minikube-darwin-arm64 -p ha-813000 status -v=7 --alsologtostderr: (1m15.067348917s)
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-813000 status -v=7 --alsologtostderr": 
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-813000 status -v=7 --alsologtostderr": 
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-813000 status -v=7 --alsologtostderr": 
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-813000 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-813000 -n ha-813000
E0925 11:55:06.488606    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/functional-251000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-813000 -n ha-813000: exit status 3 (1m15.036886417s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0925 11:55:07.829246    3630 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0925 11:55:07.829255    3630 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-813000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (162.30s)
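
The status.go errors above come from minikube opening an SSH session into the node to probe it (including the /var storage-capacity check at status.go:410); here the TCP dial to port 22 times out because the VM never came back. A rough sketch of that kind of probe, assuming the standard minikube machine key path, the default docker user, and an illustrative df command — all hypothetical here:

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/ha-813000/id_rsa")) // assumed key path
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		client, err := ssh.Dial("tcp", "192.168.105.5:22", &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
			Timeout:         10 * time.Second, // bound the dial; the log shows it timing out instead
		})
		if err != nil {
			fmt.Println("status check would fail here:", err)
			return
		}
		defer client.Close()

		sess, err := client.NewSession() // the "NewSession" in the errors above
		if err != nil {
			fmt.Println("NewSession failed:", err)
			return
		}
		defer sess.Close()
		out, _ := sess.Output("df -k /var") // illustrative storage-capacity probe
		fmt.Print(string(out))
	}
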

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (150.13s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m15.061475416s)
ha_test.go:413: expected profile "ha-813000" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-813000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-813000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-813000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-813000 -n ha-813000
E0925 11:57:22.610833    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/functional-251000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-813000 -n ha-813000: exit status 3 (1m15.064287083s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0925 11:57:37.950882    3648 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0925 11:57:37.950937    3648 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-813000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (150.13s)
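
The assertion at ha_test.go:413 decodes the JSON blob above and compares the profile's Status field against "Degraded". A minimal sketch of that check, decoding only the fields the comparison needs (the struct is an abbreviation, not the test's actual types):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Only the keys the assertion reads, taken from the output above.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			if p.Name == "ha-813000" && p.Status != "Degraded" {
				fmt.Printf("expected Degraded, got %q\n", p.Status) // this run saw "Unknown"
			}
		}
	}
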

TestMultiControlPlane/serial/RestartSecondaryNode (185.34s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 node start m02 -v=7 --alsologtostderr
E0925 11:57:38.818890    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-813000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.123800292s)

-- stdout --
	* Starting "ha-813000-m02" control-plane node in "ha-813000" cluster
	* Restarting existing qemu2 VM for "ha-813000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-813000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 11:57:38.020333    3658 out.go:345] Setting OutFile to fd 1 ...
	I0925 11:57:38.020705    3658 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 11:57:38.020711    3658 out.go:358] Setting ErrFile to fd 2...
	I0925 11:57:38.020714    3658 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 11:57:38.020899    3658 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 11:57:38.021273    3658 mustload.go:65] Loading cluster: ha-813000
	I0925 11:57:38.021619    3658 config.go:182] Loaded profile config "ha-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0925 11:57:38.021925    3658 host.go:58] "ha-813000-m02" host status: Stopped
	I0925 11:57:38.026296    3658 out.go:177] * Starting "ha-813000-m02" control-plane node in "ha-813000" cluster
	I0925 11:57:38.029359    3658 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 11:57:38.029376    3658 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0925 11:57:38.029389    3658 cache.go:56] Caching tarball of preloaded images
	I0925 11:57:38.029502    3658 preload.go:172] Found /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 11:57:38.029510    3658 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0925 11:57:38.029580    3658 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/ha-813000/config.json ...
	I0925 11:57:38.030065    3658 start.go:360] acquireMachinesLock for ha-813000-m02: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 11:57:38.030143    3658 start.go:364] duration metric: took 38.084µs to acquireMachinesLock for "ha-813000-m02"
	I0925 11:57:38.030154    3658 start.go:96] Skipping create...Using existing machine configuration
	I0925 11:57:38.030161    3658 fix.go:54] fixHost starting: m02
	I0925 11:57:38.030293    3658 fix.go:112] recreateIfNeeded on ha-813000-m02: state=Stopped err=<nil>
	W0925 11:57:38.030299    3658 fix.go:138] unexpected machine state, will restart: <nil>
	I0925 11:57:38.033241    3658 out.go:177] * Restarting existing qemu2 VM for "ha-813000-m02" ...
	I0925 11:57:38.037290    3658 qemu.go:418] Using hvf for hardware acceleration
	I0925 11:57:38.037345    3658 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/ha-813000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/ha-813000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/ha-813000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:90:36:70:95:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/ha-813000-m02/disk.qcow2
	I0925 11:57:38.039949    3658 main.go:141] libmachine: STDOUT: 
	I0925 11:57:38.039969    3658 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 11:57:38.039998    3658 fix.go:56] duration metric: took 9.836625ms for fixHost
	I0925 11:57:38.040005    3658 start.go:83] releasing machines lock for "ha-813000-m02", held for 9.854458ms
	W0925 11:57:38.040012    3658 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 11:57:38.040039    3658 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 11:57:38.040043    3658 start.go:729] Will try again in 5 seconds ...
	I0925 11:57:43.042201    3658 start.go:360] acquireMachinesLock for ha-813000-m02: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 11:57:43.042629    3658 start.go:364] duration metric: took 328.25µs to acquireMachinesLock for "ha-813000-m02"
	I0925 11:57:43.042769    3658 start.go:96] Skipping create...Using existing machine configuration
	I0925 11:57:43.042781    3658 fix.go:54] fixHost starting: m02
	I0925 11:57:43.043318    3658 fix.go:112] recreateIfNeeded on ha-813000-m02: state=Stopped err=<nil>
	W0925 11:57:43.043336    3658 fix.go:138] unexpected machine state, will restart: <nil>
	I0925 11:57:43.047818    3658 out.go:177] * Restarting existing qemu2 VM for "ha-813000-m02" ...
	I0925 11:57:43.051740    3658 qemu.go:418] Using hvf for hardware acceleration
	I0925 11:57:43.051865    3658 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/ha-813000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/ha-813000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/ha-813000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:90:36:70:95:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/ha-813000-m02/disk.qcow2
	I0925 11:57:43.056620    3658 main.go:141] libmachine: STDOUT: 
	I0925 11:57:43.056651    3658 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 11:57:43.056699    3658 fix.go:56] duration metric: took 13.919416ms for fixHost
	I0925 11:57:43.056709    3658 start.go:83] releasing machines lock for "ha-813000-m02", held for 14.011208ms
	W0925 11:57:43.056800    3658 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-813000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-813000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 11:57:43.060832    3658 out.go:201] 
	W0925 11:57:43.063748    3658 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 11:57:43.063764    3658 out.go:270] * 
	* 
	W0925 11:57:43.069443    3658 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 11:57:43.073765    3658 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:422: I0925 11:57:38.020333    3658 out.go:345] Setting OutFile to fd 1 ...
I0925 11:57:38.020705    3658 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0925 11:57:38.020711    3658 out.go:358] Setting ErrFile to fd 2...
I0925 11:57:38.020714    3658 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0925 11:57:38.020899    3658 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
I0925 11:57:38.021273    3658 mustload.go:65] Loading cluster: ha-813000
I0925 11:57:38.021619    3658 config.go:182] Loaded profile config "ha-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
W0925 11:57:38.021925    3658 host.go:58] "ha-813000-m02" host status: Stopped
I0925 11:57:38.026296    3658 out.go:177] * Starting "ha-813000-m02" control-plane node in "ha-813000" cluster
I0925 11:57:38.029359    3658 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0925 11:57:38.029376    3658 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0925 11:57:38.029389    3658 cache.go:56] Caching tarball of preloaded images
I0925 11:57:38.029502    3658 preload.go:172] Found /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0925 11:57:38.029510    3658 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0925 11:57:38.029580    3658 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/ha-813000/config.json ...
I0925 11:57:38.030065    3658 start.go:360] acquireMachinesLock for ha-813000-m02: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0925 11:57:38.030143    3658 start.go:364] duration metric: took 38.084µs to acquireMachinesLock for "ha-813000-m02"
I0925 11:57:38.030154    3658 start.go:96] Skipping create...Using existing machine configuration
I0925 11:57:38.030161    3658 fix.go:54] fixHost starting: m02
I0925 11:57:38.030293    3658 fix.go:112] recreateIfNeeded on ha-813000-m02: state=Stopped err=<nil>
W0925 11:57:38.030299    3658 fix.go:138] unexpected machine state, will restart: <nil>
I0925 11:57:38.033241    3658 out.go:177] * Restarting existing qemu2 VM for "ha-813000-m02" ...
I0925 11:57:38.037290    3658 qemu.go:418] Using hvf for hardware acceleration
I0925 11:57:38.037345    3658 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/ha-813000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/ha-813000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/ha-813000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:90:36:70:95:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/ha-813000-m02/disk.qcow2
I0925 11:57:38.039949    3658 main.go:141] libmachine: STDOUT: 
I0925 11:57:38.039969    3658 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0925 11:57:38.039998    3658 fix.go:56] duration metric: took 9.836625ms for fixHost
I0925 11:57:38.040005    3658 start.go:83] releasing machines lock for "ha-813000-m02", held for 9.854458ms
W0925 11:57:38.040012    3658 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0925 11:57:38.040039    3658 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0925 11:57:38.040043    3658 start.go:729] Will try again in 5 seconds ...
I0925 11:57:43.042201    3658 start.go:360] acquireMachinesLock for ha-813000-m02: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0925 11:57:43.042629    3658 start.go:364] duration metric: took 328.25µs to acquireMachinesLock for "ha-813000-m02"
I0925 11:57:43.042769    3658 start.go:96] Skipping create...Using existing machine configuration
I0925 11:57:43.042781    3658 fix.go:54] fixHost starting: m02
I0925 11:57:43.043318    3658 fix.go:112] recreateIfNeeded on ha-813000-m02: state=Stopped err=<nil>
W0925 11:57:43.043336    3658 fix.go:138] unexpected machine state, will restart: <nil>
I0925 11:57:43.047818    3658 out.go:177] * Restarting existing qemu2 VM for "ha-813000-m02" ...
I0925 11:57:43.051740    3658 qemu.go:418] Using hvf for hardware acceleration
I0925 11:57:43.051865    3658 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/ha-813000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/ha-813000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/ha-813000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:90:36:70:95:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/ha-813000-m02/disk.qcow2
I0925 11:57:43.056620    3658 main.go:141] libmachine: STDOUT: 
I0925 11:57:43.056651    3658 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0925 11:57:43.056699    3658 fix.go:56] duration metric: took 13.919416ms for fixHost
I0925 11:57:43.056709    3658 start.go:83] releasing machines lock for "ha-813000-m02", held for 14.011208ms
W0925 11:57:43.056800    3658 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-813000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-813000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0925 11:57:43.060832    3658 out.go:201] 
W0925 11:57:43.063748    3658 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0925 11:57:43.063764    3658 out.go:270] * 
* 
W0925 11:57:43.069443    3658 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0925 11:57:43.073765    3658 out.go:201] 

ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-813000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 status -v=7 --alsologtostderr
E0925 11:57:50.329342    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/functional-251000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Done: out/minikube-darwin-arm64 -p ha-813000 status -v=7 --alsologtostderr: (1m15.06054s)
ha_test.go:435: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-813000 status -v=7 --alsologtostderr": 
ha_test.go:438: status says not all four hosts are running: args "out/minikube-darwin-arm64 -p ha-813000 status -v=7 --alsologtostderr": 
ha_test.go:441: status says not all four kubelets are running: args "out/minikube-darwin-arm64 -p ha-813000 status -v=7 --alsologtostderr": 
ha_test.go:444: status says not all three apiservers are running: args "out/minikube-darwin-arm64 -p ha-813000 status -v=7 --alsologtostderr": 
ha_test.go:448: (dbg) Run:  kubectl get nodes
ha_test.go:448: (dbg) Non-zero exit: kubectl get nodes: exit status 1 (30.075597417s)

** stderr ** 
	Unable to connect to the server: dial tcp 192.168.105.254:8443: i/o timeout

** /stderr **
ha_test.go:450: failed to kubectl get nodes. args "kubectl get nodes" : exit status 1
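192.168.105.254:8443 is the cluster's APIServerHAVIP, so the kubectl timeout above means the HA virtual IP itself is unreachable. A quick reachability probe, skipping TLS verification since only connectivity matters here (without credentials the endpoint may still answer 401/403, which would at least prove the VIP is up):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second, // bound the probe instead of hanging 30s like kubectl above
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.105.254:8443/healthz")
		if err != nil {
			fmt.Println("apiserver VIP unreachable:", err) // matches the i/o timeout above
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
	}
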
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-813000 -n ha-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-813000 -n ha-813000: exit status 3 (1m15.078555792s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0925 12:00:43.287122    3678 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0925 12:00:43.287174    3678 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-813000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (185.34s)
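
Every restart attempt in this run fails at the same point: the qemu wrapper cannot connect to the socket_vmnet daemon's unix socket. A one-line dial reproduces the symptom independently of minikube (socket path taken from the log):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not serving:", err) // e.g. "connect: connection refused"
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is up")
	}
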

TestMultiControlPlane/serial/RestartClusterKeepsNodes (332.59s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-813000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-813000 -v=7 --alsologtostderr
E0925 12:02:22.604608    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/functional-251000/client.crt: no such file or directory" logger="UnhandledError"
E0925 12:02:38.812344    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/client.crt: no such file or directory" logger="UnhandledError"
E0925 12:04:01.896921    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/client.crt: no such file or directory" logger="UnhandledError"
E0925 12:07:22.598772    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/functional-251000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-813000 -v=7 --alsologtostderr: (5m27.18210625s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-813000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-813000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.229275708s)

-- stdout --
	* [ha-813000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-813000" primary control-plane node in "ha-813000" cluster
	* Restarting existing qemu2 VM for "ha-813000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-813000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 12:07:25.692349    4045 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:07:25.692521    4045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:07:25.692525    4045 out.go:358] Setting ErrFile to fd 2...
	I0925 12:07:25.692528    4045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:07:25.692703    4045 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:07:25.693780    4045 out.go:352] Setting JSON to false
	I0925 12:07:25.712735    4045 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4016,"bootTime":1727287229,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 12:07:25.712799    4045 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 12:07:25.718104    4045 out.go:177] * [ha-813000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 12:07:25.724115    4045 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 12:07:25.724159    4045 notify.go:220] Checking for updates...
	I0925 12:07:25.729050    4045 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:07:25.732103    4045 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 12:07:25.735054    4045 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 12:07:25.745743    4045 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 12:07:25.749102    4045 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 12:07:25.752377    4045 config.go:182] Loaded profile config "ha-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:07:25.752437    4045 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 12:07:25.757042    4045 out.go:177] * Using the qemu2 driver based on existing profile
	I0925 12:07:25.764050    4045 start.go:297] selected driver: qemu2
	I0925 12:07:25.764056    4045 start.go:901] validating driver "qemu2" against &{Name:ha-813000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-813000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:07:25.764125    4045 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 12:07:25.766984    4045 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 12:07:25.767008    4045 cni.go:84] Creating CNI manager for ""
	I0925 12:07:25.767034    4045 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0925 12:07:25.767103    4045 start.go:340] cluster config:
	{Name:ha-813000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-813000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:07:25.771323    4045 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:07:25.779067    4045 out.go:177] * Starting "ha-813000" primary control-plane node in "ha-813000" cluster
	I0925 12:07:25.783097    4045 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 12:07:25.783113    4045 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0925 12:07:25.783121    4045 cache.go:56] Caching tarball of preloaded images
	I0925 12:07:25.783183    4045 preload.go:172] Found /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 12:07:25.783190    4045 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0925 12:07:25.783261    4045 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/ha-813000/config.json ...
	I0925 12:07:25.783697    4045 start.go:360] acquireMachinesLock for ha-813000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:07:25.783730    4045 start.go:364] duration metric: took 27.541µs to acquireMachinesLock for "ha-813000"
	I0925 12:07:25.783740    4045 start.go:96] Skipping create...Using existing machine configuration
	I0925 12:07:25.783744    4045 fix.go:54] fixHost starting: 
	I0925 12:07:25.783862    4045 fix.go:112] recreateIfNeeded on ha-813000: state=Stopped err=<nil>
	W0925 12:07:25.783871    4045 fix.go:138] unexpected machine state, will restart: <nil>
	I0925 12:07:25.788073    4045 out.go:177] * Restarting existing qemu2 VM for "ha-813000" ...
	I0925 12:07:25.796038    4045 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:07:25.796071    4045 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/ha-813000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/ha-813000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/ha-813000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:79:5c:d8:6a:85 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/ha-813000/disk.qcow2
	I0925 12:07:25.798001    4045 main.go:141] libmachine: STDOUT: 
	I0925 12:07:25.798022    4045 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:07:25.798054    4045 fix.go:56] duration metric: took 14.308625ms for fixHost
	I0925 12:07:25.798058    4045 start.go:83] releasing machines lock for "ha-813000", held for 14.324041ms
	W0925 12:07:25.798066    4045 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:07:25.798091    4045 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:07:25.798096    4045 start.go:729] Will try again in 5 seconds ...
	I0925 12:07:30.800268    4045 start.go:360] acquireMachinesLock for ha-813000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:07:30.800717    4045 start.go:364] duration metric: took 340.583µs to acquireMachinesLock for "ha-813000"
	I0925 12:07:30.800861    4045 start.go:96] Skipping create...Using existing machine configuration
	I0925 12:07:30.800880    4045 fix.go:54] fixHost starting: 
	I0925 12:07:30.801613    4045 fix.go:112] recreateIfNeeded on ha-813000: state=Stopped err=<nil>
	W0925 12:07:30.801640    4045 fix.go:138] unexpected machine state, will restart: <nil>
	I0925 12:07:30.810103    4045 out.go:177] * Restarting existing qemu2 VM for "ha-813000" ...
	I0925 12:07:30.813115    4045 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:07:30.813353    4045 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/ha-813000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/ha-813000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/ha-813000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:79:5c:d8:6a:85 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/ha-813000/disk.qcow2
	I0925 12:07:30.822747    4045 main.go:141] libmachine: STDOUT: 
	I0925 12:07:30.822851    4045 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:07:30.822982    4045 fix.go:56] duration metric: took 22.101416ms for fixHost
	I0925 12:07:30.823003    4045 start.go:83] releasing machines lock for "ha-813000", held for 22.261041ms
	W0925 12:07:30.823224    4045 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-813000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-813000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:07:30.832099    4045 out.go:201] 
	W0925 12:07:30.836019    4045 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:07:30.836044    4045 out.go:270] * 
	* 
	W0925 12:07:30.838448    4045 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 12:07:30.847096    4045 out.go:201] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-813000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-813000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-813000 -n ha-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-813000 -n ha-813000: exit status 7 (32.878708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-813000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (332.59s)
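
Every start attempt in this group dies at the same step: libmachine execs /opt/socket_vmnet/bin/socket_vmnet_client, which must hand qemu-system-aarch64 a connected file descriptor for the vmnet network, and the connect to the daemon's unix socket at /var/run/socket_vmnet is refused, so the VM never boots. A minimal Go sketch that reproduces just the socket check outside the test harness (the socket path is taken from the log above; the probe program itself is illustrative, not part of minikube):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	// Dial the unix socket that socket_vmnet_client connects to. A stopped
	// or crashed socket_vmnet daemon yields "connect: connection refused",
	// the same error the qemu2 driver logs above.
	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe fails the same way, the failure is environmental (the socket_vmnet daemon on this host is down), which would explain why every qemu2-backed test below reports the identical error.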

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-813000 node delete m03 -v=7 --alsologtostderr: exit status 83 (39.441417ms)

-- stdout --
	* The control-plane node ha-813000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-813000"

-- /stdout --
** stderr ** 
	I0925 12:07:30.994343    4060 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:07:30.994602    4060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:07:30.994605    4060 out.go:358] Setting ErrFile to fd 2...
	I0925 12:07:30.994607    4060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:07:30.994736    4060 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:07:30.994953    4060 mustload.go:65] Loading cluster: ha-813000
	I0925 12:07:30.995214    4060 config.go:182] Loaded profile config "ha-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0925 12:07:30.995526    4060 out.go:270] ! The control-plane node ha-813000 host is not running (will try others): state=Stopped
	! The control-plane node ha-813000 host is not running (will try others): state=Stopped
	W0925 12:07:30.995640    4060 out.go:270] ! The control-plane node ha-813000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-813000-m02 host is not running (will try others): state=Stopped
	I0925 12:07:31.000053    4060 out.go:177] * The control-plane node ha-813000-m03 host is not running: state=Stopped
	I0925 12:07:31.001188    4060 out.go:177]   To start a cluster, run: "minikube start -p ha-813000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-813000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-813000 status -v=7 --alsologtostderr: exit status 7 (30.262458ms)

-- stdout --
	ha-813000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-813000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-813000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-813000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0925 12:07:31.032903    4062 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:07:31.033056    4062 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:07:31.033059    4062 out.go:358] Setting ErrFile to fd 2...
	I0925 12:07:31.033061    4062 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:07:31.033208    4062 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:07:31.033334    4062 out.go:352] Setting JSON to false
	I0925 12:07:31.033346    4062 mustload.go:65] Loading cluster: ha-813000
	I0925 12:07:31.033396    4062 notify.go:220] Checking for updates...
	I0925 12:07:31.033584    4062 config.go:182] Loaded profile config "ha-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:07:31.033593    4062 status.go:174] checking status of ha-813000 ...
	I0925 12:07:31.033848    4062 status.go:364] ha-813000 host status = "Stopped" (err=<nil>)
	I0925 12:07:31.033852    4062 status.go:377] host is not running, skipping remaining checks
	I0925 12:07:31.033854    4062 status.go:176] ha-813000 status: &{Name:ha-813000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0925 12:07:31.033864    4062 status.go:174] checking status of ha-813000-m02 ...
	I0925 12:07:31.033951    4062 status.go:364] ha-813000-m02 host status = "Stopped" (err=<nil>)
	I0925 12:07:31.033954    4062 status.go:377] host is not running, skipping remaining checks
	I0925 12:07:31.033956    4062 status.go:176] ha-813000-m02 status: &{Name:ha-813000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0925 12:07:31.033960    4062 status.go:174] checking status of ha-813000-m03 ...
	I0925 12:07:31.034045    4062 status.go:364] ha-813000-m03 host status = "Stopped" (err=<nil>)
	I0925 12:07:31.034048    4062 status.go:377] host is not running, skipping remaining checks
	I0925 12:07:31.034049    4062 status.go:176] ha-813000-m03 status: &{Name:ha-813000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0925 12:07:31.034053    4062 status.go:174] checking status of ha-813000-m04 ...
	I0925 12:07:31.034143    4062 status.go:364] ha-813000-m04 host status = "Stopped" (err=<nil>)
	I0925 12:07:31.034146    4062 status.go:377] host is not running, skipping remaining checks
	I0925 12:07:31.034148    4062 status.go:176] ha-813000-m04 status: &{Name:ha-813000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-813000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-813000 -n ha-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-813000 -n ha-813000: exit status 7 (30.396083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-813000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-813000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-813000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-813000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-813000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-813000 -n ha-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-813000 -n ha-813000: exit status 7 (30.084417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-813000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
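
This assertion shells out to `minikube profile list --output json` and inspects the top-level Status of the "ha-813000" entry, which still reads "Starting" rather than the expected "Degraded" because the cluster never came back up. A short Go sketch of that extraction, using only field names visible in the JSON logged above (the binary path and the printout are illustrative, not the test's actual code):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// Mirrors just the fields of the `profile list --output json` payload
	// that the assertion compares; all other config fields are ignored.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			log.Fatal(err)
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: %s\n", p.Name, p.Status) // here: "ha-813000: Starting", not "Degraded"
		}
	}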

TestMultiControlPlane/serial/StopCluster (300.23s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 stop -v=7 --alsologtostderr
E0925 12:07:38.806779    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/client.crt: no such file or directory" logger="UnhandledError"
E0925 12:08:45.680060    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/functional-251000/client.crt: no such file or directory" logger="UnhandledError"
E0925 12:12:22.592827    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/functional-251000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-813000 stop -v=7 --alsologtostderr: (5m0.129701375s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-813000 status -v=7 --alsologtostderr: exit status 7 (64.915583ms)

-- stdout --
	ha-813000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-813000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-813000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-813000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0925 12:12:31.331501    4093 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:12:31.331695    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:12:31.331700    4093 out.go:358] Setting ErrFile to fd 2...
	I0925 12:12:31.331703    4093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:12:31.331885    4093 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:12:31.332049    4093 out.go:352] Setting JSON to false
	I0925 12:12:31.332064    4093 mustload.go:65] Loading cluster: ha-813000
	I0925 12:12:31.332109    4093 notify.go:220] Checking for updates...
	I0925 12:12:31.332405    4093 config.go:182] Loaded profile config "ha-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:12:31.332415    4093 status.go:174] checking status of ha-813000 ...
	I0925 12:12:31.332712    4093 status.go:364] ha-813000 host status = "Stopped" (err=<nil>)
	I0925 12:12:31.332717    4093 status.go:377] host is not running, skipping remaining checks
	I0925 12:12:31.332719    4093 status.go:176] ha-813000 status: &{Name:ha-813000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0925 12:12:31.332732    4093 status.go:174] checking status of ha-813000-m02 ...
	I0925 12:12:31.332857    4093 status.go:364] ha-813000-m02 host status = "Stopped" (err=<nil>)
	I0925 12:12:31.332861    4093 status.go:377] host is not running, skipping remaining checks
	I0925 12:12:31.332863    4093 status.go:176] ha-813000-m02 status: &{Name:ha-813000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0925 12:12:31.332868    4093 status.go:174] checking status of ha-813000-m03 ...
	I0925 12:12:31.332981    4093 status.go:364] ha-813000-m03 host status = "Stopped" (err=<nil>)
	I0925 12:12:31.332984    4093 status.go:377] host is not running, skipping remaining checks
	I0925 12:12:31.332987    4093 status.go:176] ha-813000-m03 status: &{Name:ha-813000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0925 12:12:31.332991    4093 status.go:174] checking status of ha-813000-m04 ...
	I0925 12:12:31.333106    4093 status.go:364] ha-813000-m04 host status = "Stopped" (err=<nil>)
	I0925 12:12:31.333109    4093 status.go:377] host is not running, skipping remaining checks
	I0925 12:12:31.333111    4093 status.go:176] ha-813000-m04 status: &{Name:ha-813000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-813000 status -v=7 --alsologtostderr": ha-813000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-813000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-813000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-813000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-813000 status -v=7 --alsologtostderr": ha-813000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-813000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-813000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-813000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-813000 status -v=7 --alsologtostderr": ha-813000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-813000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-813000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-813000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-813000 -n ha-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-813000 -n ha-813000: exit status 7 (32.37875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-813000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (300.23s)

TestMultiControlPlane/serial/RestartCluster (5.25s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-813000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-813000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.1765625s)

-- stdout --
	* [ha-813000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-813000" primary control-plane node in "ha-813000" cluster
	* Restarting existing qemu2 VM for "ha-813000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-813000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 12:12:31.394880    4097 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:12:31.394993    4097 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:12:31.394997    4097 out.go:358] Setting ErrFile to fd 2...
	I0925 12:12:31.395007    4097 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:12:31.395136    4097 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:12:31.396166    4097 out.go:352] Setting JSON to false
	I0925 12:12:31.412183    4097 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4322,"bootTime":1727287229,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 12:12:31.412248    4097 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 12:12:31.416958    4097 out.go:177] * [ha-813000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 12:12:31.423858    4097 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 12:12:31.423907    4097 notify.go:220] Checking for updates...
	I0925 12:12:31.430924    4097 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:12:31.433881    4097 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 12:12:31.436884    4097 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 12:12:31.439953    4097 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 12:12:31.441202    4097 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 12:12:31.444272    4097 config.go:182] Loaded profile config "ha-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:12:31.444533    4097 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 12:12:31.448916    4097 out.go:177] * Using the qemu2 driver based on existing profile
	I0925 12:12:31.453833    4097 start.go:297] selected driver: qemu2
	I0925 12:12:31.453841    4097 start.go:901] validating driver "qemu2" against &{Name:ha-813000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-813000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:12:31.453913    4097 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 12:12:31.456183    4097 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 12:12:31.456209    4097 cni.go:84] Creating CNI manager for ""
	I0925 12:12:31.456228    4097 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0925 12:12:31.456270    4097 start.go:340] cluster config:
	{Name:ha-813000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-813000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:12:31.459741    4097 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:12:31.467893    4097 out.go:177] * Starting "ha-813000" primary control-plane node in "ha-813000" cluster
	I0925 12:12:31.471931    4097 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 12:12:31.471947    4097 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0925 12:12:31.471957    4097 cache.go:56] Caching tarball of preloaded images
	I0925 12:12:31.472030    4097 preload.go:172] Found /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 12:12:31.472036    4097 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0925 12:12:31.472115    4097 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/ha-813000/config.json ...
	I0925 12:12:31.472514    4097 start.go:360] acquireMachinesLock for ha-813000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:12:31.472545    4097 start.go:364] duration metric: took 25.667µs to acquireMachinesLock for "ha-813000"
	I0925 12:12:31.472555    4097 start.go:96] Skipping create...Using existing machine configuration
	I0925 12:12:31.472560    4097 fix.go:54] fixHost starting: 
	I0925 12:12:31.472669    4097 fix.go:112] recreateIfNeeded on ha-813000: state=Stopped err=<nil>
	W0925 12:12:31.472678    4097 fix.go:138] unexpected machine state, will restart: <nil>
	I0925 12:12:31.476898    4097 out.go:177] * Restarting existing qemu2 VM for "ha-813000" ...
	I0925 12:12:31.484891    4097 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:12:31.484921    4097 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/ha-813000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/ha-813000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/ha-813000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:79:5c:d8:6a:85 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/ha-813000/disk.qcow2
	I0925 12:12:31.486694    4097 main.go:141] libmachine: STDOUT: 
	I0925 12:12:31.486711    4097 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:12:31.486740    4097 fix.go:56] duration metric: took 14.177958ms for fixHost
	I0925 12:12:31.486744    4097 start.go:83] releasing machines lock for "ha-813000", held for 14.195459ms
	W0925 12:12:31.486749    4097 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:12:31.486782    4097 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:12:31.486787    4097 start.go:729] Will try again in 5 seconds ...
	I0925 12:12:36.488951    4097 start.go:360] acquireMachinesLock for ha-813000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:12:36.489362    4097 start.go:364] duration metric: took 313.5µs to acquireMachinesLock for "ha-813000"
	I0925 12:12:36.489500    4097 start.go:96] Skipping create...Using existing machine configuration
	I0925 12:12:36.489522    4097 fix.go:54] fixHost starting: 
	I0925 12:12:36.490238    4097 fix.go:112] recreateIfNeeded on ha-813000: state=Stopped err=<nil>
	W0925 12:12:36.490267    4097 fix.go:138] unexpected machine state, will restart: <nil>
	I0925 12:12:36.494796    4097 out.go:177] * Restarting existing qemu2 VM for "ha-813000" ...
	I0925 12:12:36.501682    4097 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:12:36.502035    4097 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/ha-813000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/ha-813000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/ha-813000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:79:5c:d8:6a:85 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/ha-813000/disk.qcow2
	I0925 12:12:36.510987    4097 main.go:141] libmachine: STDOUT: 
	I0925 12:12:36.511088    4097 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:12:36.511177    4097 fix.go:56] duration metric: took 21.654209ms for fixHost
	I0925 12:12:36.511203    4097 start.go:83] releasing machines lock for "ha-813000", held for 21.817459ms
	W0925 12:12:36.511428    4097 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-813000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-813000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:12:36.518663    4097 out.go:201] 
	W0925 12:12:36.522736    4097 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:12:36.522768    4097 out.go:270] * 
	* 
	W0925 12:12:36.525377    4097 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 12:12:36.531641    4097 out.go:201] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-813000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-813000 -n ha-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-813000 -n ha-813000: exit status 7 (70.070792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-813000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-813000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-813000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-813000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-813000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-813000 -n ha-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-813000 -n ha-813000: exit status 7 (29.371792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-813000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-813000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-813000 --control-plane -v=7 --alsologtostderr: exit status 83 (41.266666ms)

-- stdout --
	* The control-plane node ha-813000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-813000"

-- /stdout --
** stderr ** 
	I0925 12:12:36.723526    4114 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:12:36.723718    4114 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:12:36.723722    4114 out.go:358] Setting ErrFile to fd 2...
	I0925 12:12:36.723725    4114 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:12:36.723832    4114 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:12:36.724050    4114 mustload.go:65] Loading cluster: ha-813000
	I0925 12:12:36.724297    4114 config.go:182] Loaded profile config "ha-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0925 12:12:36.724602    4114 out.go:270] ! The control-plane node ha-813000 host is not running (will try others): state=Stopped
	! The control-plane node ha-813000 host is not running (will try others): state=Stopped
	W0925 12:12:36.724730    4114 out.go:270] ! The control-plane node ha-813000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-813000-m02 host is not running (will try others): state=Stopped
	I0925 12:12:36.728968    4114 out.go:177] * The control-plane node ha-813000-m03 host is not running: state=Stopped
	I0925 12:12:36.732942    4114 out.go:177]   To start a cluster, run: "minikube start -p ha-813000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-813000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-813000 -n ha-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-813000 -n ha-813000: exit status 7 (30.058417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-813000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestImageBuild/serial/Setup (10.05s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-864000 --driver=qemu2 
E0925 12:12:38.800555    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/client.crt: no such file or directory" logger="UnhandledError"
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-864000 --driver=qemu2 : exit status 80 (9.983740875s)

-- stdout --
	* [image-864000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-864000" primary control-plane node in "image-864000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-864000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-864000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-864000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-864000 -n image-864000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-864000 -n image-864000: exit status 7 (68.461666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-864000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.05s)
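All of the qemu2 start failures in this report reduce to the same precondition: nothing is listening on the socket_vmnet unix socket, so socket_vmnet_client gets "connection refused" before qemu ever boots. A standalone sketch that checks that precondition directly, using the socket path reported in the errors above:

    // socket_probe.go: reproduces the precondition behind the GUEST_PROVISION
    // failures above by dialing the socket_vmnet unix socket directly;
    // "connection refused" means no socket_vmnet daemon is listening.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	const sock = "/var/run/socket_vmnet" // path from the errors above
    	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
    	if err != nil {
    		fmt.Printf("dial %s: %v\n", sock, err) // e.g. "connect: connection refused"
    		return
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }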

                                                
                                    
TestJSONOutput/start/Command (9.8s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-457000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-457000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.797492417s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b29add3f-9228-4a22-896f-aff2249266c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-457000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"04696a96-f058-41b6-a2c0-0a082fa077e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19681"}}
	{"specversion":"1.0","id":"cafccf01-571b-44ce-851c-0e9a71d64905","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig"}}
	{"specversion":"1.0","id":"1501cee5-2440-4dc6-b421-d03144fb8829","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"3c08f7cb-c357-4838-a2e7-fedb0b8d4c07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"de6f6220-edb8-4a51-8c0d-c3ddc04beaff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube"}}
	{"specversion":"1.0","id":"f6cbb838-1c9c-432d-be69-ded1e9361d3f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9ca15676-9eac-4db0-9978-9655f6ad8a53","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"491d8273-5d38-41c4-8f70-3af26d7f840e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"a8b90e4c-d7af-48b8-a223-c2b60f4c3b96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-457000\" primary control-plane node in \"json-output-457000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"fdeb804c-4da1-45b5-a841-d7e3fe73bf20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"43778221-8479-44b3-a82d-1b9a7e39110a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-457000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"f09a34c7-7530-474f-bb5d-123fb618da69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"a5804ad9-78b8-454b-ac38-5768b773205e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"9519d714-e952-442d-b097-2172a5aad5ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-457000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"590ceb22-c1c9-431f-9bef-a4c9cb6a5326","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"a445da74-773f-431f-a0c0-a2992fb27f3f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-457000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.80s)
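The conversion error above ("invalid character 'O' looking for beginning of value") is the test's cloud-event parser hitting the bare OUTPUT:/ERROR: lines that socket_vmnet_client interleaves into what should be a pure JSON-lines stream; the unpause failure below trips the same decoder on a leading '*'. A sketch of per-line validation that makes the failure mode concrete (sample lines abridged from this log; this is an illustration, not minikube's own decoder):

    // jsonlines_check.go: with --output=json every stdout line must be a JSON
    // object, but the stream above mixes in raw "OUTPUT:" / "ERROR:" lines.
    package main

    import (
    	"bufio"
    	"encoding/json"
    	"fmt"
    	"strings"
    )

    func main() {
    	stream := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"message":"Creating qemu2 VM ..."}}
    OUTPUT: 
    ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused`

    	sc := bufio.NewScanner(strings.NewReader(stream))
    	for n := 1; sc.Scan(); n++ {
    		line := strings.TrimSpace(sc.Text())
    		if !json.Valid([]byte(line)) {
    			// This is where the real test aborts with "invalid character ...".
    			fmt.Printf("line %d is not JSON: %q\n", n, line)
    			continue
    		}
    		var ev map[string]any
    		_ = json.Unmarshal([]byte(line), &ev) // valid per json.Valid above
    		fmt.Printf("line %d: event type %v\n", n, ev["type"])
    	}
    }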

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-457000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-457000 --output=json --user=testUser: exit status 83 (79.29975ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ef97b589-7e7e-4c93-8918-052718605083","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-457000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"358d9f40-4b75-4000-8635-be4f461e1a24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-457000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-457000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)
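For contrast, the two pause events above are well-formed cloud events; this test fails on exit status 83 (the stopped-host advice path), not on JSON shape. A sketch decoding one of them into a typed struct, with the field set inferred from the events in this log:

    // event_decode.go: decodes one pause event from the log above; the JSON is
    // fine, the test's complaint is only the non-zero exit code.
    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    type cloudEvent struct {
    	SpecVersion     string            `json:"specversion"`
    	ID              string            `json:"id"`
    	Source          string            `json:"source"`
    	Type            string            `json:"type"`
    	DataContentType string            `json:"datacontenttype"`
    	Data            map[string]string `json:"data"`
    }

    func main() {
    	line := `{"specversion":"1.0","id":"ef97b589-7e7e-4c93-8918-052718605083","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-457000 host is not running: state=Stopped"}}`
    	var ev cloudEvent
    	if err := json.Unmarshal([]byte(line), &ev); err != nil {
    		fmt.Println("decode:", err)
    		return
    	}
    	fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
    }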

                                                
                                    
TestJSONOutput/unpause/Command (0.05s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-457000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-457000 --output=json --user=testUser: exit status 83 (45.337916ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-457000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-457000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-457000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-457000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

                                                
                                    
TestMinikubeProfile (10.1s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-691000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-691000 --driver=qemu2 : exit status 80 (9.799697333s)

                                                
                                                
-- stdout --
	* [first-691000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-691000" primary control-plane node in "first-691000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-691000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-691000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-691000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-25 12:13:11.356782 -0700 PDT m=+2667.447807459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-693000 -n second-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-693000 -n second-693000: exit status 85 (78.721875ms)

                                                
                                                
-- stdout --
	* Profile "second-693000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-693000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-693000" host is not running, skipping log retrieval (state="* Profile \"second-693000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-693000\"")
helpers_test.go:175: Cleaning up "second-693000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-693000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-25 12:13:11.54663 -0700 PDT m=+2667.637659043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-691000 -n first-691000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-691000 -n first-691000: exit status 7 (30.162792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-691000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-691000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-691000
--- FAIL: TestMinikubeProfile (10.10s)
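The post-mortem above probes second-693000, a profile the test never got far enough to create (exit status 85, "Profile not found"), and then deletes both profiles regardless of state. A sketch of that tolerant cleanup loop, assuming minikube on PATH and the profile names from this log:

    // cleanup.go: the cleanup shape helpers_test.go shows above: run
    // "minikube delete -p" for every profile the test may have created,
    // whether or not it actually started.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	for _, profile := range []string{"first-691000", "second-693000"} {
    		out, err := exec.Command("minikube", "delete", "-p", profile).CombinedOutput()
    		// The log above runs delete for both profiles without checking
    		// beforehand whether they exist.
    		fmt.Printf("delete %s: err=%v\n%s", profile, err, out)
    	}
    }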

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.03s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-641000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-641000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.957573292s)

                                                
                                                
-- stdout --
	* [mount-start-1-641000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-641000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-641000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-641000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-641000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-641000 -n mount-start-1-641000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-641000 -n mount-start-1-641000: exit status 7 (69.998166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-641000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.03s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-761000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-761000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.872215375s)

                                                
                                                
-- stdout --
	* [multinode-761000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-761000" primary control-plane node in "multinode-761000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-761000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0925 12:13:21.905152    4259 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:13:21.905270    4259 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:13:21.905273    4259 out.go:358] Setting ErrFile to fd 2...
	I0925 12:13:21.905275    4259 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:13:21.905410    4259 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:13:21.906467    4259 out.go:352] Setting JSON to false
	I0925 12:13:21.922388    4259 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4372,"bootTime":1727287229,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 12:13:21.922462    4259 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 12:13:21.930056    4259 out.go:177] * [multinode-761000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 12:13:21.938907    4259 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 12:13:21.938968    4259 notify.go:220] Checking for updates...
	I0925 12:13:21.946818    4259 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:13:21.949924    4259 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 12:13:21.952857    4259 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 12:13:21.955795    4259 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 12:13:21.958833    4259 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 12:13:21.962042    4259 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 12:13:21.965772    4259 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 12:13:21.972875    4259 start.go:297] selected driver: qemu2
	I0925 12:13:21.972882    4259 start.go:901] validating driver "qemu2" against <nil>
	I0925 12:13:21.972890    4259 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 12:13:21.975199    4259 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0925 12:13:21.977775    4259 out.go:177] * Automatically selected the socket_vmnet network
	I0925 12:13:21.980900    4259 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 12:13:21.980925    4259 cni.go:84] Creating CNI manager for ""
	I0925 12:13:21.980946    4259 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0925 12:13:21.980953    4259 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0925 12:13:21.980993    4259 start.go:340] cluster config:
	{Name:multinode-761000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-761000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:13:21.984623    4259 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:13:21.990828    4259 out.go:177] * Starting "multinode-761000" primary control-plane node in "multinode-761000" cluster
	I0925 12:13:21.994817    4259 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 12:13:21.994832    4259 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0925 12:13:21.994845    4259 cache.go:56] Caching tarball of preloaded images
	I0925 12:13:21.994923    4259 preload.go:172] Found /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 12:13:21.994935    4259 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0925 12:13:21.995147    4259 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/multinode-761000/config.json ...
	I0925 12:13:21.995159    4259 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/multinode-761000/config.json: {Name:mk1482a1b3c8476cad67a600eafe828f8c1a3bae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:13:21.995394    4259 start.go:360] acquireMachinesLock for multinode-761000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:13:21.995429    4259 start.go:364] duration metric: took 29.416µs to acquireMachinesLock for "multinode-761000"
	I0925 12:13:21.995443    4259 start.go:93] Provisioning new machine with config: &{Name:multinode-761000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-761000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:13:21.995475    4259 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:13:22.002821    4259 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0925 12:13:22.021075    4259 start.go:159] libmachine.API.Create for "multinode-761000" (driver="qemu2")
	I0925 12:13:22.021101    4259 client.go:168] LocalClient.Create starting
	I0925 12:13:22.021167    4259 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:13:22.021199    4259 main.go:141] libmachine: Decoding PEM data...
	I0925 12:13:22.021210    4259 main.go:141] libmachine: Parsing certificate...
	I0925 12:13:22.021247    4259 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:13:22.021271    4259 main.go:141] libmachine: Decoding PEM data...
	I0925 12:13:22.021280    4259 main.go:141] libmachine: Parsing certificate...
	I0925 12:13:22.021709    4259 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:13:22.186837    4259 main.go:141] libmachine: Creating SSH key...
	I0925 12:13:22.292988    4259 main.go:141] libmachine: Creating Disk image...
	I0925 12:13:22.292996    4259 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:13:22.293181    4259 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/multinode-761000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/multinode-761000/disk.qcow2
	I0925 12:13:22.302221    4259 main.go:141] libmachine: STDOUT: 
	I0925 12:13:22.302251    4259 main.go:141] libmachine: STDERR: 
	I0925 12:13:22.302310    4259 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/multinode-761000/disk.qcow2 +20000M
	I0925 12:13:22.310080    4259 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:13:22.310099    4259 main.go:141] libmachine: STDERR: 
	I0925 12:13:22.310117    4259 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/multinode-761000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/multinode-761000/disk.qcow2
	I0925 12:13:22.310122    4259 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:13:22.310132    4259 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:13:22.310166    4259 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/multinode-761000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/multinode-761000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/multinode-761000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:49:11:88:47:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/multinode-761000/disk.qcow2
	I0925 12:13:22.311778    4259 main.go:141] libmachine: STDOUT: 
	I0925 12:13:22.311791    4259 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:13:22.311811    4259 client.go:171] duration metric: took 290.708292ms to LocalClient.Create
	I0925 12:13:24.313975    4259 start.go:128] duration metric: took 2.318527041s to createHost
	I0925 12:13:24.314039    4259 start.go:83] releasing machines lock for "multinode-761000", held for 2.3186425s
	W0925 12:13:24.314141    4259 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:13:24.334502    4259 out.go:177] * Deleting "multinode-761000" in qemu2 ...
	W0925 12:13:24.367434    4259 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:13:24.367454    4259 start.go:729] Will try again in 5 seconds ...
	I0925 12:13:29.369560    4259 start.go:360] acquireMachinesLock for multinode-761000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:13:29.370108    4259 start.go:364] duration metric: took 429.417µs to acquireMachinesLock for "multinode-761000"
	I0925 12:13:29.370246    4259 start.go:93] Provisioning new machine with config: &{Name:multinode-761000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-761000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:13:29.370503    4259 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:13:29.390235    4259 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0925 12:13:29.440057    4259 start.go:159] libmachine.API.Create for "multinode-761000" (driver="qemu2")
	I0925 12:13:29.440102    4259 client.go:168] LocalClient.Create starting
	I0925 12:13:29.440222    4259 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:13:29.440285    4259 main.go:141] libmachine: Decoding PEM data...
	I0925 12:13:29.440303    4259 main.go:141] libmachine: Parsing certificate...
	I0925 12:13:29.440366    4259 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:13:29.440410    4259 main.go:141] libmachine: Decoding PEM data...
	I0925 12:13:29.440421    4259 main.go:141] libmachine: Parsing certificate...
	I0925 12:13:29.440927    4259 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:13:29.613671    4259 main.go:141] libmachine: Creating SSH key...
	I0925 12:13:29.679763    4259 main.go:141] libmachine: Creating Disk image...
	I0925 12:13:29.679769    4259 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:13:29.679952    4259 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/multinode-761000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/multinode-761000/disk.qcow2
	I0925 12:13:29.689061    4259 main.go:141] libmachine: STDOUT: 
	I0925 12:13:29.689080    4259 main.go:141] libmachine: STDERR: 
	I0925 12:13:29.689136    4259 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/multinode-761000/disk.qcow2 +20000M
	I0925 12:13:29.696954    4259 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:13:29.696968    4259 main.go:141] libmachine: STDERR: 
	I0925 12:13:29.696977    4259 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/multinode-761000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/multinode-761000/disk.qcow2
	I0925 12:13:29.696981    4259 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:13:29.696988    4259 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:13:29.697013    4259 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/multinode-761000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/multinode-761000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/multinode-761000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:d9:43:49:4b:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/multinode-761000/disk.qcow2
	I0925 12:13:29.698650    4259 main.go:141] libmachine: STDOUT: 
	I0925 12:13:29.698661    4259 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:13:29.698674    4259 client.go:171] duration metric: took 258.572334ms to LocalClient.Create
	I0925 12:13:31.700821    4259 start.go:128] duration metric: took 2.330322916s to createHost
	I0925 12:13:31.700961    4259 start.go:83] releasing machines lock for "multinode-761000", held for 2.33078125s
	W0925 12:13:31.701389    4259 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-761000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-761000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:13:31.718002    4259 out.go:201] 
	W0925 12:13:31.722056    4259 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:13:31.722081    4259 out.go:270] * 
	* 
	W0925 12:13:31.724651    4259 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 12:13:31.736806    4259 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-761000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-761000 -n multinode-761000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-761000 -n multinode-761000: exit status 7 (66.492042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-761000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.94s)
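The `-netdev socket,id=net0,fd=3` argument in the qemu command lines above documents the launch convention: socket_vmnet_client connects to /var/run/socket_vmnet and hands qemu the already-connected socket as file descriptor 3, which is exactly the step that fails in this run. A sketch of the same fd-passing convention in os/exec terms (an illustration, not minikube's libmachine code; in Go, ExtraFiles[i] becomes fd 3+i in the child):

    // fd_passing.go: parent dials the socket, child inherits it as fd 3.
    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"os/exec"
    )

    func main() {
    	conn, err := net.Dial("unix", "/var/run/socket_vmnet") // refused in this run
    	if err != nil {
    		fmt.Println("dial:", err)
    		return
    	}
    	f, err := conn.(*net.UnixConn).File() // duplicate the connected fd
    	if err != nil {
    		fmt.Println("file:", err)
    		return
    	}
    	// Abridged argument list; the full command appears in the log above.
    	cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
    	cmd.ExtraFiles = []*os.File{f} // f shows up as fd 3 inside qemu
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	fmt.Println("run:", cmd.Run())
    }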

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (114.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-761000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-761000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (131.602042ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-761000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-761000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-761000 -- rollout status deployment/busybox: exit status 1 (58.5425ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-761000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-761000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-761000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.132584ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-761000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0925 12:13:32.066513    1934 retry.go:31] will retry after 1.347998376s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-761000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-761000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.753708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-761000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0925 12:13:33.520660    1934 retry.go:31] will retry after 1.762937215s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-761000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-761000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.886875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-761000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0925 12:13:35.389785    1934 retry.go:31] will retry after 1.446294326s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-761000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-761000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.269708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-761000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0925 12:13:36.941712    1934 retry.go:31] will retry after 4.508882867s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-761000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-761000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.986959ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-761000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0925 12:13:41.555973    1934 retry.go:31] will retry after 6.549970382s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-761000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-761000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.98975ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-761000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0925 12:13:48.211259    1934 retry.go:31] will retry after 8.560373533s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-761000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-761000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.184291ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-761000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0925 12:13:56.876993    1934 retry.go:31] will retry after 10.897634595s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-761000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-761000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.982083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-761000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0925 12:14:07.879950    1934 retry.go:31] will retry after 21.135886632s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-761000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-761000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.102792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-761000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0925 12:14:29.121903    1934 retry.go:31] will retry after 33.528378753s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-761000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-761000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.515375ms)

** stderr ** 
	error: no server found for cluster "multinode-761000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0925 12:15:02.756658    1934 retry.go:31] will retry after 23.599279339s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-761000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-761000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.252125ms)

** stderr ** 
	error: no server found for cluster "multinode-761000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-761000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-761000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.306542ms)

** stderr ** 
	error: no server found for cluster "multinode-761000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-761000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-761000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.197542ms)

** stderr ** 
	error: no server found for cluster "multinode-761000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-761000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-761000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.32125ms)

** stderr ** 
	error: no server found for cluster "multinode-761000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-761000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-761000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.665958ms)

** stderr ** 
	error: no server found for cluster "multinode-761000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-761000 -n multinode-761000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-761000 -n multinode-761000: exit status 7 (29.97925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-761000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (114.90s)
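
The "will retry after …" lines above (retry.go:31) are a jittered backoff loop wrapped around the kubectl query. For illustration only, a minimal Go sketch of that shape; the podIPs helper and its use of plain kubectl with --context are invented here, and minikube's real retry.go differs:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// podIPs runs the same jsonpath query the test uses (hypothetical helper).
func podIPs(profile string) (string, error) {
	out, err := exec.Command("kubectl", "--context", profile,
		"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
	return string(out), err
}

func main() {
	delay := 5 * time.Second
	for attempt := 1; attempt <= 7; attempt++ {
		ips, err := podIPs("multinode-761000")
		if err == nil {
			fmt.Println("pod IPs:", ips)
			return
		}
		// Jittered, growing wait, like the 6.5s, 8.5s, 10.9s, 21.1s
		// intervals in the log above.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
}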

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-761000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-761000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.276833ms)

** stderr ** 
	error: no server found for cluster "multinode-761000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-761000 -n multinode-761000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-761000 -n multinode-761000: exit status 7 (29.364541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-761000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.08s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-761000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-761000 -v 3 --alsologtostderr: exit status 83 (45.58275ms)

-- stdout --
	* The control-plane node multinode-761000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-761000"

-- /stdout --
** stderr ** 
	I0925 12:15:26.839114    4346 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:15:26.839262    4346 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:15:26.839265    4346 out.go:358] Setting ErrFile to fd 2...
	I0925 12:15:26.839267    4346 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:15:26.839394    4346 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:15:26.839627    4346 mustload.go:65] Loading cluster: multinode-761000
	I0925 12:15:26.839855    4346 config.go:182] Loaded profile config "multinode-761000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:15:26.845646    4346 out.go:177] * The control-plane node multinode-761000 host is not running: state=Stopped
	I0925 12:15:26.850594    4346 out.go:177]   To start a cluster, run: "minikube start -p multinode-761000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-761000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-761000 -n multinode-761000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-761000 -n multinode-761000: exit status 7 (30.282791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-761000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-761000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-761000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (28.4145ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-761000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-761000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-761000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-761000 -n multinode-761000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-761000 -n multinode-761000: exit status 7 (30.481041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-761000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
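
The second failure line ("unexpected end of JSON input") follows mechanically from the first: kubectl wrote only to stderr, so the test unmarshals an empty stdout, and encoding/json reports exactly that error for empty input. A minimal demonstration of the failure mode, not the test's own code:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Empty stdout from the failed kubectl call stands in here.
	var labels []map[string]string
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // unexpected end of JSON input
}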

TestMultiNode/serial/ProfileList (0.08s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-761000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-761000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-761000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"multinode-761000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-761000 -n multinode-761000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-761000 -n multinode-761000: exit status 7 (30.185042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-761000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
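
The assertion walks the 'profile list' JSON and compares the length of Config.Nodes against the expected node count; the profile quoted above carries a single entry in Nodes, hence "include 3 nodes but have 1 nodes". A trimmed sketch of that check, with the struct cut down to fields visible in the log (these are not the test's own types):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList models just enough of the JSON shown in the log.
type profileList struct {
	Valid []struct {
		Name   string
		Config struct {
			Nodes []struct {
				ControlPlane bool
				Worker       bool
			}
		}
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// The test expects 3 nodes here; the stopped VM left only 1.
		fmt.Printf("%s: %d nodes\n", p.Name, len(p.Config.Nodes))
	}
}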

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-761000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-761000 status --output json --alsologtostderr: exit status 7 (30.323041ms)

-- stdout --
	{"Name":"multinode-761000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0925 12:15:27.052604    4358 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:15:27.052765    4358 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:15:27.052768    4358 out.go:358] Setting ErrFile to fd 2...
	I0925 12:15:27.052771    4358 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:15:27.052900    4358 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:15:27.053019    4358 out.go:352] Setting JSON to true
	I0925 12:15:27.053031    4358 mustload.go:65] Loading cluster: multinode-761000
	I0925 12:15:27.053104    4358 notify.go:220] Checking for updates...
	I0925 12:15:27.053251    4358 config.go:182] Loaded profile config "multinode-761000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:15:27.053260    4358 status.go:174] checking status of multinode-761000 ...
	I0925 12:15:27.053501    4358 status.go:364] multinode-761000 host status = "Stopped" (err=<nil>)
	I0925 12:15:27.053504    4358 status.go:377] host is not running, skipping remaining checks
	I0925 12:15:27.053506    4358 status.go:176] multinode-761000 status: &{Name:multinode-761000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-761000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-761000 -n multinode-761000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-761000 -n multinode-761000: exit status 7 (29.9815ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-761000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
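
The decode error here is a shape mismatch, not corrupt output: with a single node, "minikube status --output json" prints one bare object, while the test unmarshals into []cluster.Status. A tolerant decoder that accepts either shape (illustrative; the status type is cut down):

package main

import (
	"encoding/json"
	"fmt"
)

type status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
}

// decodeStatuses accepts both a JSON array and a bare object.
func decodeStatuses(data []byte) ([]status, error) {
	var many []status
	if err := json.Unmarshal(data, &many); err == nil {
		return many, nil
	}
	var one status
	if err := json.Unmarshal(data, &one); err != nil {
		return nil, err
	}
	return []status{one}, nil
}

func main() {
	// The single-object stdout quoted in the log above.
	single := []byte(`{"Name":"multinode-761000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped"}`)
	got, err := decodeStatuses(single)
	fmt.Println(got, err)
}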

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-761000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-761000 node stop m03: exit status 85 (45.574ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-761000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-761000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-761000 status: exit status 7 (29.543875ms)

-- stdout --
	multinode-761000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-761000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-761000 status --alsologtostderr: exit status 7 (30.3045ms)

-- stdout --
	multinode-761000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0925 12:15:27.188801    4366 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:15:27.188956    4366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:15:27.188959    4366 out.go:358] Setting ErrFile to fd 2...
	I0925 12:15:27.188962    4366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:15:27.189108    4366 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:15:27.189233    4366 out.go:352] Setting JSON to false
	I0925 12:15:27.189243    4366 mustload.go:65] Loading cluster: multinode-761000
	I0925 12:15:27.189292    4366 notify.go:220] Checking for updates...
	I0925 12:15:27.189462    4366 config.go:182] Loaded profile config "multinode-761000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:15:27.189470    4366 status.go:174] checking status of multinode-761000 ...
	I0925 12:15:27.189716    4366 status.go:364] multinode-761000 host status = "Stopped" (err=<nil>)
	I0925 12:15:27.189720    4366 status.go:377] host is not running, skipping remaining checks
	I0925 12:15:27.189722    4366 status.go:176] multinode-761000 status: &{Name:multinode-761000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-761000 status --alsologtostderr": multinode-761000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-761000 -n multinode-761000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-761000 -n multinode-761000: exit status 7 (30.226084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-761000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)
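
The post-mortem helper extracts a single field with --format={{.Host}}, which minikube renders as a Go text/template over the status value. The same mechanics in miniature (a sketch with a trimmed struct, not minikube's code):

package main

import (
	"os"
	"text/template"
)

func main() {
	st := struct{ Name, Host string }{Name: "multinode-761000", Host: "Stopped"}
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	// Renders "Stopped" for the stopped host above.
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}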

TestMultiNode/serial/StartAfterStop (51.91s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-761000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-761000 node start m03 -v=7 --alsologtostderr: exit status 85 (45.343584ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0925 12:15:27.249270    4370 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:15:27.249517    4370 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:15:27.249520    4370 out.go:358] Setting ErrFile to fd 2...
	I0925 12:15:27.249523    4370 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:15:27.249649    4370 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:15:27.249878    4370 mustload.go:65] Loading cluster: multinode-761000
	I0925 12:15:27.250085    4370 config.go:182] Loaded profile config "multinode-761000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:15:27.254629    4370 out.go:201] 
	W0925 12:15:27.257649    4370 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0925 12:15:27.257654    4370 out.go:270] * 
	* 
	W0925 12:15:27.259380    4370 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 12:15:27.262653    4370 out.go:201] 

** /stderr **
multinode_test.go:284: I0925 12:15:27.249270    4370 out.go:345] Setting OutFile to fd 1 ...
I0925 12:15:27.249517    4370 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0925 12:15:27.249520    4370 out.go:358] Setting ErrFile to fd 2...
I0925 12:15:27.249523    4370 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0925 12:15:27.249649    4370 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
I0925 12:15:27.249878    4370 mustload.go:65] Loading cluster: multinode-761000
I0925 12:15:27.250085    4370 config.go:182] Loaded profile config "multinode-761000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0925 12:15:27.254629    4370 out.go:201] 
W0925 12:15:27.257649    4370 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0925 12:15:27.257654    4370 out.go:270] * 
* 
W0925 12:15:27.259380    4370 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0925 12:15:27.262653    4370 out.go:201] 

multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-761000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-761000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-761000 status -v=7 --alsologtostderr: exit status 7 (30.55075ms)

-- stdout --
	multinode-761000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0925 12:15:27.295492    4372 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:15:27.295639    4372 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:15:27.295643    4372 out.go:358] Setting ErrFile to fd 2...
	I0925 12:15:27.295645    4372 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:15:27.295795    4372 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:15:27.295919    4372 out.go:352] Setting JSON to false
	I0925 12:15:27.295930    4372 mustload.go:65] Loading cluster: multinode-761000
	I0925 12:15:27.295986    4372 notify.go:220] Checking for updates...
	I0925 12:15:27.296130    4372 config.go:182] Loaded profile config "multinode-761000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:15:27.296139    4372 status.go:174] checking status of multinode-761000 ...
	I0925 12:15:27.296376    4372 status.go:364] multinode-761000 host status = "Stopped" (err=<nil>)
	I0925 12:15:27.296379    4372 status.go:377] host is not running, skipping remaining checks
	I0925 12:15:27.296381    4372 status.go:176] multinode-761000 status: &{Name:multinode-761000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0925 12:15:27.297190    1934 retry.go:31] will retry after 802.600476ms: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-761000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-761000 status -v=7 --alsologtostderr: exit status 7 (75.180208ms)

-- stdout --
	multinode-761000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0925 12:15:28.175171    4374 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:15:28.175396    4374 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:15:28.175404    4374 out.go:358] Setting ErrFile to fd 2...
	I0925 12:15:28.175408    4374 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:15:28.175579    4374 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:15:28.175743    4374 out.go:352] Setting JSON to false
	I0925 12:15:28.175757    4374 mustload.go:65] Loading cluster: multinode-761000
	I0925 12:15:28.175791    4374 notify.go:220] Checking for updates...
	I0925 12:15:28.176034    4374 config.go:182] Loaded profile config "multinode-761000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:15:28.176048    4374 status.go:174] checking status of multinode-761000 ...
	I0925 12:15:28.176346    4374 status.go:364] multinode-761000 host status = "Stopped" (err=<nil>)
	I0925 12:15:28.176351    4374 status.go:377] host is not running, skipping remaining checks
	I0925 12:15:28.176354    4374 status.go:176] multinode-761000 status: &{Name:multinode-761000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0925 12:15:28.177415    1934 retry.go:31] will retry after 1.662677515s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-761000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-761000 status -v=7 --alsologtostderr: exit status 7 (74.388292ms)

-- stdout --
	multinode-761000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0925 12:15:29.914629    4376 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:15:29.914899    4376 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:15:29.914904    4376 out.go:358] Setting ErrFile to fd 2...
	I0925 12:15:29.914908    4376 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:15:29.915083    4376 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:15:29.915259    4376 out.go:352] Setting JSON to false
	I0925 12:15:29.915274    4376 mustload.go:65] Loading cluster: multinode-761000
	I0925 12:15:29.915316    4376 notify.go:220] Checking for updates...
	I0925 12:15:29.915578    4376 config.go:182] Loaded profile config "multinode-761000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:15:29.915591    4376 status.go:174] checking status of multinode-761000 ...
	I0925 12:15:29.915913    4376 status.go:364] multinode-761000 host status = "Stopped" (err=<nil>)
	I0925 12:15:29.915918    4376 status.go:377] host is not running, skipping remaining checks
	I0925 12:15:29.915921    4376 status.go:176] multinode-761000 status: &{Name:multinode-761000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0925 12:15:29.916978    1934 retry.go:31] will retry after 1.490781275s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-761000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-761000 status -v=7 --alsologtostderr: exit status 7 (73.2855ms)

-- stdout --
	multinode-761000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0925 12:15:31.481323    4378 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:15:31.481516    4378 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:15:31.481521    4378 out.go:358] Setting ErrFile to fd 2...
	I0925 12:15:31.481524    4378 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:15:31.481700    4378 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:15:31.481851    4378 out.go:352] Setting JSON to false
	I0925 12:15:31.481864    4378 mustload.go:65] Loading cluster: multinode-761000
	I0925 12:15:31.481901    4378 notify.go:220] Checking for updates...
	I0925 12:15:31.482161    4378 config.go:182] Loaded profile config "multinode-761000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:15:31.482171    4378 status.go:174] checking status of multinode-761000 ...
	I0925 12:15:31.482500    4378 status.go:364] multinode-761000 host status = "Stopped" (err=<nil>)
	I0925 12:15:31.482505    4378 status.go:377] host is not running, skipping remaining checks
	I0925 12:15:31.482507    4378 status.go:176] multinode-761000 status: &{Name:multinode-761000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0925 12:15:31.483507    1934 retry.go:31] will retry after 3.520987701s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-761000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-761000 status -v=7 --alsologtostderr: exit status 7 (73.628917ms)

-- stdout --
	multinode-761000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0925 12:15:35.078460    4381 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:15:35.078709    4381 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:15:35.078713    4381 out.go:358] Setting ErrFile to fd 2...
	I0925 12:15:35.078717    4381 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:15:35.078874    4381 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:15:35.079048    4381 out.go:352] Setting JSON to false
	I0925 12:15:35.079065    4381 mustload.go:65] Loading cluster: multinode-761000
	I0925 12:15:35.079105    4381 notify.go:220] Checking for updates...
	I0925 12:15:35.079354    4381 config.go:182] Loaded profile config "multinode-761000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:15:35.079369    4381 status.go:174] checking status of multinode-761000 ...
	I0925 12:15:35.079702    4381 status.go:364] multinode-761000 host status = "Stopped" (err=<nil>)
	I0925 12:15:35.079708    4381 status.go:377] host is not running, skipping remaining checks
	I0925 12:15:35.079711    4381 status.go:176] multinode-761000 status: &{Name:multinode-761000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0925 12:15:35.080808    1934 retry.go:31] will retry after 3.174339996s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-761000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-761000 status -v=7 --alsologtostderr: exit status 7 (73.973042ms)

-- stdout --
	multinode-761000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0925 12:15:38.329433    4383 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:15:38.329658    4383 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:15:38.329662    4383 out.go:358] Setting ErrFile to fd 2...
	I0925 12:15:38.329665    4383 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:15:38.329829    4383 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:15:38.330006    4383 out.go:352] Setting JSON to false
	I0925 12:15:38.330020    4383 mustload.go:65] Loading cluster: multinode-761000
	I0925 12:15:38.330066    4383 notify.go:220] Checking for updates...
	I0925 12:15:38.330311    4383 config.go:182] Loaded profile config "multinode-761000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:15:38.330322    4383 status.go:174] checking status of multinode-761000 ...
	I0925 12:15:38.330630    4383 status.go:364] multinode-761000 host status = "Stopped" (err=<nil>)
	I0925 12:15:38.330635    4383 status.go:377] host is not running, skipping remaining checks
	I0925 12:15:38.330638    4383 status.go:176] multinode-761000 status: &{Name:multinode-761000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0925 12:15:38.331670    1934 retry.go:31] will retry after 10.121292693s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-761000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-761000 status -v=7 --alsologtostderr: exit status 7 (72.87225ms)

-- stdout --
	multinode-761000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0925 12:15:48.526430    4385 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:15:48.526615    4385 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:15:48.526620    4385 out.go:358] Setting ErrFile to fd 2...
	I0925 12:15:48.526623    4385 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:15:48.526804    4385 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:15:48.526956    4385 out.go:352] Setting JSON to false
	I0925 12:15:48.526969    4385 mustload.go:65] Loading cluster: multinode-761000
	I0925 12:15:48.527014    4385 notify.go:220] Checking for updates...
	I0925 12:15:48.527236    4385 config.go:182] Loaded profile config "multinode-761000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:15:48.527246    4385 status.go:174] checking status of multinode-761000 ...
	I0925 12:15:48.527605    4385 status.go:364] multinode-761000 host status = "Stopped" (err=<nil>)
	I0925 12:15:48.527610    4385 status.go:377] host is not running, skipping remaining checks
	I0925 12:15:48.527612    4385 status.go:176] multinode-761000 status: &{Name:multinode-761000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0925 12:15:48.528618    1934 retry.go:31] will retry after 12.711709304s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-761000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-761000 status -v=7 --alsologtostderr: exit status 7 (73.026209ms)

-- stdout --
	multinode-761000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0925 12:16:01.313596    4389 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:16:01.313808    4389 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:16:01.313812    4389 out.go:358] Setting ErrFile to fd 2...
	I0925 12:16:01.313816    4389 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:16:01.313974    4389 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:16:01.314150    4389 out.go:352] Setting JSON to false
	I0925 12:16:01.314164    4389 mustload.go:65] Loading cluster: multinode-761000
	I0925 12:16:01.314204    4389 notify.go:220] Checking for updates...
	I0925 12:16:01.314464    4389 config.go:182] Loaded profile config "multinode-761000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:16:01.314474    4389 status.go:174] checking status of multinode-761000 ...
	I0925 12:16:01.314796    4389 status.go:364] multinode-761000 host status = "Stopped" (err=<nil>)
	I0925 12:16:01.314802    4389 status.go:377] host is not running, skipping remaining checks
	I0925 12:16:01.314804    4389 status.go:176] multinode-761000 status: &{Name:multinode-761000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0925 12:16:01.315853    1934 retry.go:31] will retry after 17.706098826s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-761000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-761000 status -v=7 --alsologtostderr: exit status 7 (74.237125ms)

-- stdout --
	multinode-761000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0925 12:16:19.096198    4392 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:16:19.096426    4392 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:16:19.096430    4392 out.go:358] Setting ErrFile to fd 2...
	I0925 12:16:19.096434    4392 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:16:19.096636    4392 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:16:19.096813    4392 out.go:352] Setting JSON to false
	I0925 12:16:19.096828    4392 mustload.go:65] Loading cluster: multinode-761000
	I0925 12:16:19.096862    4392 notify.go:220] Checking for updates...
	I0925 12:16:19.097128    4392 config.go:182] Loaded profile config "multinode-761000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:16:19.097138    4392 status.go:174] checking status of multinode-761000 ...
	I0925 12:16:19.097467    4392 status.go:364] multinode-761000 host status = "Stopped" (err=<nil>)
	I0925 12:16:19.097472    4392 status.go:377] host is not running, skipping remaining checks
	I0925 12:16:19.097474    4392 status.go:176] multinode-761000 status: &{Name:multinode-761000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-761000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-761000 -n multinode-761000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-761000 -n multinode-761000: exit status 7 (33.287ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-761000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (51.91s)
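
The block above is the same backoff loop as before, but bounded by an overall deadline rather than a fixed attempt count: retries stop once the next wait would overshoot it. A generic poll-until-deadline helper of that shape (names invented; a sketch, not the test helper):

package main

import (
	"errors"
	"fmt"
	"time"
)

// pollUntil retries fn with a growing delay until it succeeds or the
// deadline would be overshot, mirroring the repeated status checks above.
func pollUntil(deadline time.Time, fn func() error) error {
	delay := time.Second
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().Add(delay).After(deadline) {
			return fmt.Errorf("gave up waiting: %w", err)
		}
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow ~1.5x, roughly like the log's intervals
	}
}

func main() {
	deadline := time.Now().Add(5 * time.Second)
	err := pollUntil(deadline, func() error {
		return errors.New("exit status 7") // stands in for the failing status call
	})
	fmt.Println(err)
}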

TestMultiNode/serial/RestartKeepsNodes (8.99s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-761000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-761000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-761000: (3.640435834s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-761000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-761000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.215916792s)

-- stdout --
	* [multinode-761000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-761000" primary control-plane node in "multinode-761000" cluster
	* Restarting existing qemu2 VM for "multinode-761000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-761000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 12:16:22.866796    4416 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:16:22.866989    4416 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:16:22.866994    4416 out.go:358] Setting ErrFile to fd 2...
	I0925 12:16:22.866997    4416 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:16:22.867152    4416 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:16:22.868366    4416 out.go:352] Setting JSON to false
	I0925 12:16:22.888165    4416 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4553,"bootTime":1727287229,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 12:16:22.888237    4416 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 12:16:22.892557    4416 out.go:177] * [multinode-761000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 12:16:22.898315    4416 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 12:16:22.898346    4416 notify.go:220] Checking for updates...
	I0925 12:16:22.904275    4416 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:16:22.907312    4416 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 12:16:22.910331    4416 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 12:16:22.911717    4416 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 12:16:22.914267    4416 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 12:16:22.917642    4416 config.go:182] Loaded profile config "multinode-761000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:16:22.917709    4416 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 12:16:22.922282    4416 out.go:177] * Using the qemu2 driver based on existing profile
	I0925 12:16:22.929326    4416 start.go:297] selected driver: qemu2
	I0925 12:16:22.929337    4416 start.go:901] validating driver "qemu2" against &{Name:multinode-761000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-761000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:16:22.929420    4416 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 12:16:22.931855    4416 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 12:16:22.931880    4416 cni.go:84] Creating CNI manager for ""
	I0925 12:16:22.931910    4416 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0925 12:16:22.931953    4416 start.go:340] cluster config:
	{Name:multinode-761000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-761000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:16:22.935461    4416 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:16:22.942309    4416 out.go:177] * Starting "multinode-761000" primary control-plane node in "multinode-761000" cluster
	I0925 12:16:22.946317    4416 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 12:16:22.946331    4416 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0925 12:16:22.946337    4416 cache.go:56] Caching tarball of preloaded images
	I0925 12:16:22.946400    4416 preload.go:172] Found /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 12:16:22.946406    4416 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0925 12:16:22.946456    4416 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/multinode-761000/config.json ...
	I0925 12:16:22.946782    4416 start.go:360] acquireMachinesLock for multinode-761000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:16:22.946816    4416 start.go:364] duration metric: took 28.333µs to acquireMachinesLock for "multinode-761000"
	I0925 12:16:22.946826    4416 start.go:96] Skipping create...Using existing machine configuration
	I0925 12:16:22.946831    4416 fix.go:54] fixHost starting: 
	I0925 12:16:22.946958    4416 fix.go:112] recreateIfNeeded on multinode-761000: state=Stopped err=<nil>
	W0925 12:16:22.946967    4416 fix.go:138] unexpected machine state, will restart: <nil>
	I0925 12:16:22.955290    4416 out.go:177] * Restarting existing qemu2 VM for "multinode-761000" ...
	I0925 12:16:22.959263    4416 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:16:22.959297    4416 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/multinode-761000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/multinode-761000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/multinode-761000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:d9:43:49:4b:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/multinode-761000/disk.qcow2
	I0925 12:16:22.961362    4416 main.go:141] libmachine: STDOUT: 
	I0925 12:16:22.961382    4416 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:16:22.961412    4416 fix.go:56] duration metric: took 14.579334ms for fixHost
	I0925 12:16:22.961417    4416 start.go:83] releasing machines lock for "multinode-761000", held for 14.596458ms
	W0925 12:16:22.961424    4416 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:16:22.961463    4416 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:16:22.961468    4416 start.go:729] Will try again in 5 seconds ...
	I0925 12:16:27.963662    4416 start.go:360] acquireMachinesLock for multinode-761000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:16:27.964061    4416 start.go:364] duration metric: took 316.667µs to acquireMachinesLock for "multinode-761000"
	I0925 12:16:27.964192    4416 start.go:96] Skipping create...Using existing machine configuration
	I0925 12:16:27.964211    4416 fix.go:54] fixHost starting: 
	I0925 12:16:27.964928    4416 fix.go:112] recreateIfNeeded on multinode-761000: state=Stopped err=<nil>
	W0925 12:16:27.964954    4416 fix.go:138] unexpected machine state, will restart: <nil>
	I0925 12:16:27.969201    4416 out.go:177] * Restarting existing qemu2 VM for "multinode-761000" ...
	I0925 12:16:27.973353    4416 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:16:27.973614    4416 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/multinode-761000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/multinode-761000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/multinode-761000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:d9:43:49:4b:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/multinode-761000/disk.qcow2
	I0925 12:16:27.982588    4416 main.go:141] libmachine: STDOUT: 
	I0925 12:16:27.982649    4416 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:16:27.982740    4416 fix.go:56] duration metric: took 18.528625ms for fixHost
	I0925 12:16:27.982757    4416 start.go:83] releasing machines lock for "multinode-761000", held for 18.667917ms
	W0925 12:16:27.982967    4416 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-761000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-761000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:16:27.991335    4416 out.go:201] 
	W0925 12:16:27.995399    4416 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:16:27.995434    4416 out.go:270] * 
	* 
	W0925 12:16:27.997965    4416 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 12:16:28.005336    4416 out.go:201] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-761000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-761000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-761000 -n multinode-761000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-761000 -n multinode-761000: exit status 7 (32.672375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-761000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.99s)
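
Every start in this run dies before the VM boots, for the same root cause: the qemu2 driver launches QEMU through socket_vmnet_client, and that client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so it exits with "Connection refused". The failing check can be reproduced outside of minikube by dialing the unix socket directly; a sketch, with the socket path taken from the SocketVMnetPath field in the cluster config above:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Dial the same unix socket socket_vmnet_client needs; "connection refused"
        // (or "no such file or directory") reproduces the failures in this report.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If this dial fails, deleting and recreating the profile (as the error text suggests) will keep failing; the socket_vmnet service on the host has to be brought back first.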

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-761000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-761000 node delete m03: exit status 83 (39.759875ms)

-- stdout --
	* The control-plane node multinode-761000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-761000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-761000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-761000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-761000 status --alsologtostderr: exit status 7 (29.918125ms)

-- stdout --
	multinode-761000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0925 12:16:28.188759    4430 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:16:28.188935    4430 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:16:28.188940    4430 out.go:358] Setting ErrFile to fd 2...
	I0925 12:16:28.188943    4430 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:16:28.189080    4430 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:16:28.189212    4430 out.go:352] Setting JSON to false
	I0925 12:16:28.189222    4430 mustload.go:65] Loading cluster: multinode-761000
	I0925 12:16:28.189286    4430 notify.go:220] Checking for updates...
	I0925 12:16:28.189420    4430 config.go:182] Loaded profile config "multinode-761000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:16:28.189431    4430 status.go:174] checking status of multinode-761000 ...
	I0925 12:16:28.189683    4430 status.go:364] multinode-761000 host status = "Stopped" (err=<nil>)
	I0925 12:16:28.189686    4430 status.go:377] host is not running, skipping remaining checks
	I0925 12:16:28.189688    4430 status.go:176] multinode-761000 status: &{Name:multinode-761000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-761000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-761000 -n multinode-761000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-761000 -n multinode-761000: exit status 7 (30.037ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-761000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
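
Two different non-zero codes appear in this block. Exit status 83 accompanies the "control-plane node ... host is not running" message whenever a command needs a running cluster, while the 7 returned by `minikube status` is consistent with the status command encoding each stopped component as a bit flag and OR-ing them together. That bit-set reading is an interpretation of the observed values, not something the log states; as a hypothetical reconstruction:

    package main

    import "fmt"

    // Hypothetical reconstruction: one bit per stopped component.
    const (
        hostStopped      = 1 << 0 // 1
        kubeletStopped   = 1 << 1 // 2
        apiserverStopped = 1 << 2 // 4
    )

    func main() {
        fmt.Println(hostStopped | kubeletStopped | apiserverStopped) // 7, matching the exit status above
    }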

TestMultiNode/serial/StopMultiNode (3.44s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-761000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-761000 stop: (3.312735958s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-761000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-761000 status: exit status 7 (62.603709ms)

-- stdout --
	multinode-761000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-761000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-761000 status --alsologtostderr: exit status 7 (32.260542ms)

-- stdout --
	multinode-761000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0925 12:16:31.626993    4454 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:16:31.627123    4454 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:16:31.627127    4454 out.go:358] Setting ErrFile to fd 2...
	I0925 12:16:31.627129    4454 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:16:31.627255    4454 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:16:31.627374    4454 out.go:352] Setting JSON to false
	I0925 12:16:31.627386    4454 mustload.go:65] Loading cluster: multinode-761000
	I0925 12:16:31.627448    4454 notify.go:220] Checking for updates...
	I0925 12:16:31.627623    4454 config.go:182] Loaded profile config "multinode-761000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:16:31.627631    4454 status.go:174] checking status of multinode-761000 ...
	I0925 12:16:31.627862    4454 status.go:364] multinode-761000 host status = "Stopped" (err=<nil>)
	I0925 12:16:31.627866    4454 status.go:377] host is not running, skipping remaining checks
	I0925 12:16:31.627868    4454 status.go:176] multinode-761000 status: &{Name:multinode-761000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-761000 status --alsologtostderr": multinode-761000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-761000 status --alsologtostderr": multinode-761000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-761000 -n multinode-761000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-761000 -n multinode-761000: exit status 7 (29.963125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-761000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.44s)
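
Note that the stop itself succeeded; the assertions fail on the shape of the status output. Because the second node never joined (see the earlier failures), the status above lists a single stopped host and kubelet where the multinode test expects two of each. The check is effectively a substring count over the status text; a sketch of the idea, not the test's actual code:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Status text as captured in the -- stdout -- block above.
        out := "multinode-761000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
        fmt.Println(strings.Count(out, "host: Stopped"))    // 1, but the two-node test expects 2
        fmt.Println(strings.Count(out, "kubelet: Stopped")) // 1, likewise
    }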

TestMultiNode/serial/RestartMultiNode (5.26s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-761000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-761000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.187474541s)

-- stdout --
	* [multinode-761000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-761000" primary control-plane node in "multinode-761000" cluster
	* Restarting existing qemu2 VM for "multinode-761000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-761000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 12:16:31.686821    4458 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:16:31.686974    4458 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:16:31.686978    4458 out.go:358] Setting ErrFile to fd 2...
	I0925 12:16:31.686980    4458 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:16:31.687101    4458 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:16:31.688121    4458 out.go:352] Setting JSON to false
	I0925 12:16:31.704339    4458 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4562,"bootTime":1727287229,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 12:16:31.704401    4458 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 12:16:31.709230    4458 out.go:177] * [multinode-761000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 12:16:31.717302    4458 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 12:16:31.717385    4458 notify.go:220] Checking for updates...
	I0925 12:16:31.725019    4458 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:16:31.728089    4458 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 12:16:31.731174    4458 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 12:16:31.734180    4458 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 12:16:31.737125    4458 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 12:16:31.740485    4458 config.go:182] Loaded profile config "multinode-761000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:16:31.740740    4458 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 12:16:31.745138    4458 out.go:177] * Using the qemu2 driver based on existing profile
	I0925 12:16:31.752218    4458 start.go:297] selected driver: qemu2
	I0925 12:16:31.752225    4458 start.go:901] validating driver "qemu2" against &{Name:multinode-761000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-761000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:16:31.752291    4458 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 12:16:31.754743    4458 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 12:16:31.754769    4458 cni.go:84] Creating CNI manager for ""
	I0925 12:16:31.754793    4458 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0925 12:16:31.754844    4458 start.go:340] cluster config:
	{Name:multinode-761000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-761000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:16:31.758527    4458 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:16:31.765147    4458 out.go:177] * Starting "multinode-761000" primary control-plane node in "multinode-761000" cluster
	I0925 12:16:31.769154    4458 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 12:16:31.769173    4458 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0925 12:16:31.769181    4458 cache.go:56] Caching tarball of preloaded images
	I0925 12:16:31.769251    4458 preload.go:172] Found /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 12:16:31.769257    4458 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0925 12:16:31.769314    4458 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/multinode-761000/config.json ...
	I0925 12:16:31.769737    4458 start.go:360] acquireMachinesLock for multinode-761000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:16:31.769765    4458 start.go:364] duration metric: took 22.125µs to acquireMachinesLock for "multinode-761000"
	I0925 12:16:31.769775    4458 start.go:96] Skipping create...Using existing machine configuration
	I0925 12:16:31.769781    4458 fix.go:54] fixHost starting: 
	I0925 12:16:31.769902    4458 fix.go:112] recreateIfNeeded on multinode-761000: state=Stopped err=<nil>
	W0925 12:16:31.769911    4458 fix.go:138] unexpected machine state, will restart: <nil>
	I0925 12:16:31.773138    4458 out.go:177] * Restarting existing qemu2 VM for "multinode-761000" ...
	I0925 12:16:31.781015    4458 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:16:31.781056    4458 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/multinode-761000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/multinode-761000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/multinode-761000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:d9:43:49:4b:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/multinode-761000/disk.qcow2
	I0925 12:16:31.783150    4458 main.go:141] libmachine: STDOUT: 
	I0925 12:16:31.783171    4458 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:16:31.783206    4458 fix.go:56] duration metric: took 13.424583ms for fixHost
	I0925 12:16:31.783212    4458 start.go:83] releasing machines lock for "multinode-761000", held for 13.442834ms
	W0925 12:16:31.783219    4458 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:16:31.783254    4458 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:16:31.783259    4458 start.go:729] Will try again in 5 seconds ...
	I0925 12:16:36.784615    4458 start.go:360] acquireMachinesLock for multinode-761000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:16:36.785183    4458 start.go:364] duration metric: took 422.125µs to acquireMachinesLock for "multinode-761000"
	I0925 12:16:36.785326    4458 start.go:96] Skipping create...Using existing machine configuration
	I0925 12:16:36.785349    4458 fix.go:54] fixHost starting: 
	I0925 12:16:36.786150    4458 fix.go:112] recreateIfNeeded on multinode-761000: state=Stopped err=<nil>
	W0925 12:16:36.786176    4458 fix.go:138] unexpected machine state, will restart: <nil>
	I0925 12:16:36.792443    4458 out.go:177] * Restarting existing qemu2 VM for "multinode-761000" ...
	I0925 12:16:36.799296    4458 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:16:36.799448    4458 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/multinode-761000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/multinode-761000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/multinode-761000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:d9:43:49:4b:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/multinode-761000/disk.qcow2
	I0925 12:16:36.806965    4458 main.go:141] libmachine: STDOUT: 
	I0925 12:16:36.807002    4458 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:16:36.807072    4458 fix.go:56] duration metric: took 21.7255ms for fixHost
	I0925 12:16:36.807086    4458 start.go:83] releasing machines lock for "multinode-761000", held for 21.882875ms
	W0925 12:16:36.807218    4458 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-761000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-761000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:16:36.815305    4458 out.go:201] 
	W0925 12:16:36.818425    4458 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:16:36.818618    4458 out.go:270] * 
	* 
	W0925 12:16:36.821385    4458 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 12:16:36.829239    4458 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-761000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-761000 -n multinode-761000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-761000 -n multinode-761000: exit status 7 (68.712375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-761000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
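
The recovery path is visible in the stderr trace: after the first "! StartHost failed, but will try again", start.go sleeps five seconds and retries once before giving up with GUEST_PROVISION. With a host-side daemon down, the retry can never succeed, which is why each of these restart tests fails in roughly five seconds plus overhead. The shape of that logic, as a sketch (fixHost here is a stand-in that always fails, as in this run):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // fixHost stands in for the driver start; in this run it always fails
    // because the socket_vmnet daemon is down.
    func fixHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        if err := fixHost(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
            if err := fixHost(); err != nil {
                fmt.Println("X Exiting due to GUEST_PROVISION:", err)
            }
        }
    }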

TestMultiNode/serial/ValidateNameConflict (20.1s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-761000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-761000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-761000-m01 --driver=qemu2 : exit status 80 (9.900355417s)

-- stdout --
	* [multinode-761000-m01] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-761000-m01" primary control-plane node in "multinode-761000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-761000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-761000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-761000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-761000-m02 --driver=qemu2 : exit status 80 (9.9752085s)

-- stdout --
	* [multinode-761000-m02] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-761000-m02" primary control-plane node in "multinode-761000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-761000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-761000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-761000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-761000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-761000: exit status 83 (80.21925ms)

-- stdout --
	* The control-plane node multinode-761000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-761000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-761000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-761000 -n multinode-761000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-761000 -n multinode-761000: exit status 7 (30.890167ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-761000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.10s)
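
This test exercises the naming convention used throughout the report: additional nodes of a profile get an -mNN suffix (m02, m03, ...), so standalone profiles named multinode-761000-m01 and multinode-761000-m02 are created to collide with what `node add` would generate. A sketch of that suffix scheme, inferred from the names in these logs rather than taken from minikube's source:

    package main

    import "fmt"

    // nodeName reproduces the -mNN suffix pattern seen in this report
    // (multinode-761000-m02, -m03); inferred, not minikube's actual helper.
    func nodeName(profile string, idx int) string {
        return fmt.Sprintf("%s-m%02d", profile, idx)
    }

    func main() {
        fmt.Println(nodeName("multinode-761000", 2)) // multinode-761000-m02
        fmt.Println(nodeName("multinode-761000", 3)) // multinode-761000-m03
    }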

TestPreload (10.24s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-896000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-896000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (10.088813708s)

-- stdout --
	* [test-preload-896000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-896000" primary control-plane node in "test-preload-896000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-896000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 12:16:57.158491    4513 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:16:57.158608    4513 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:16:57.158610    4513 out.go:358] Setting ErrFile to fd 2...
	I0925 12:16:57.158613    4513 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:16:57.158728    4513 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:16:57.159759    4513 out.go:352] Setting JSON to false
	I0925 12:16:57.175706    4513 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4588,"bootTime":1727287229,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 12:16:57.175775    4513 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 12:16:57.182236    4513 out.go:177] * [test-preload-896000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 12:16:57.188993    4513 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 12:16:57.189063    4513 notify.go:220] Checking for updates...
	I0925 12:16:57.196012    4513 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:16:57.198993    4513 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 12:16:57.202023    4513 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 12:16:57.204972    4513 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 12:16:57.208018    4513 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 12:16:57.211379    4513 config.go:182] Loaded profile config "multinode-761000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:16:57.211428    4513 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 12:16:57.214898    4513 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 12:16:57.222030    4513 start.go:297] selected driver: qemu2
	I0925 12:16:57.222039    4513 start.go:901] validating driver "qemu2" against <nil>
	I0925 12:16:57.222047    4513 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 12:16:57.224340    4513 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0925 12:16:57.225611    4513 out.go:177] * Automatically selected the socket_vmnet network
	I0925 12:16:57.228035    4513 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 12:16:57.228052    4513 cni.go:84] Creating CNI manager for ""
	I0925 12:16:57.228078    4513 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 12:16:57.228083    4513 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 12:16:57.228116    4513 start.go:340] cluster config:
	{Name:test-preload-896000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:16:57.231693    4513 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:16:57.238959    4513 out.go:177] * Starting "test-preload-896000" primary control-plane node in "test-preload-896000" cluster
	I0925 12:16:57.242992    4513 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0925 12:16:57.243066    4513 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/test-preload-896000/config.json ...
	I0925 12:16:57.243090    4513 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/test-preload-896000/config.json: {Name:mk70905b2a2b9606930e89eb2d1b1e98c4685e9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:16:57.243089    4513 cache.go:107] acquiring lock: {Name:mk273e4e461f6b0311e73b06070cb24e4edfcf62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:16:57.243091    4513 cache.go:107] acquiring lock: {Name:mk6b097bd8811002d5ddc041aee0c9a6907db9c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:16:57.243138    4513 cache.go:107] acquiring lock: {Name:mka76227c556b673a6566e6f2bdf0128bad877a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:16:57.243236    4513 cache.go:107] acquiring lock: {Name:mk1d7e371020dafd2e5be9261a07ce565767dd75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:16:57.243277    4513 cache.go:107] acquiring lock: {Name:mkd7a7bd4d8ec5a425292a318426c1e51ab80ee9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:16:57.243217    4513 cache.go:107] acquiring lock: {Name:mkb954ff50b60a21a609df6f40706d7d4cb49d62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:16:57.243328    4513 cache.go:107] acquiring lock: {Name:mk71a9231d70ec1d465bb1e9a4d860adb7db1327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:16:57.243375    4513 cache.go:107] acquiring lock: {Name:mk5dccfafe3a1d5ec2134b03f99151fa4e51cce1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:16:57.243458    4513 start.go:360] acquireMachinesLock for test-preload-896000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:16:57.243534    4513 start.go:364] duration metric: took 54.875µs to acquireMachinesLock for "test-preload-896000"
	I0925 12:16:57.243570    4513 start.go:93] Provisioning new machine with config: &{Name:test-preload-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:16:57.243645    4513 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:16:57.243651    4513 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0925 12:16:57.243682    4513 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0925 12:16:57.243689    4513 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0925 12:16:57.243789    4513 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0925 12:16:57.243795    4513 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0925 12:16:57.243633    4513 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0925 12:16:57.243852    4513 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0925 12:16:57.244190    4513 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 12:16:57.248000    4513 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0925 12:16:57.255791    4513 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0925 12:16:57.256205    4513 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0925 12:16:57.256330    4513 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0925 12:16:57.256384    4513 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0925 12:16:57.256472    4513 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0925 12:16:57.256493    4513 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0925 12:16:57.256631    4513 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0925 12:16:57.257018    4513 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 12:16:57.265771    4513 start.go:159] libmachine.API.Create for "test-preload-896000" (driver="qemu2")
	I0925 12:16:57.265797    4513 client.go:168] LocalClient.Create starting
	I0925 12:16:57.265878    4513 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:16:57.265913    4513 main.go:141] libmachine: Decoding PEM data...
	I0925 12:16:57.265925    4513 main.go:141] libmachine: Parsing certificate...
	I0925 12:16:57.265971    4513 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:16:57.265994    4513 main.go:141] libmachine: Decoding PEM data...
	I0925 12:16:57.266004    4513 main.go:141] libmachine: Parsing certificate...
	I0925 12:16:57.266395    4513 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:16:57.431130    4513 main.go:141] libmachine: Creating SSH key...
	I0925 12:16:57.632908    4513 main.go:141] libmachine: Creating Disk image...
	I0925 12:16:57.632930    4513 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:16:57.633133    4513 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/test-preload-896000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/test-preload-896000/disk.qcow2
	I0925 12:16:57.643340    4513 main.go:141] libmachine: STDOUT: 
	I0925 12:16:57.643366    4513 main.go:141] libmachine: STDERR: 
	I0925 12:16:57.643427    4513 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/test-preload-896000/disk.qcow2 +20000M
	I0925 12:16:57.652144    4513 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:16:57.652166    4513 main.go:141] libmachine: STDERR: 
	I0925 12:16:57.652180    4513 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/test-preload-896000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/test-preload-896000/disk.qcow2
	I0925 12:16:57.652187    4513 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:16:57.652199    4513 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:16:57.652226    4513 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/test-preload-896000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/test-preload-896000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/test-preload-896000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:60:56:9c:d9:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/test-preload-896000/disk.qcow2
	I0925 12:16:57.653998    4513 main.go:141] libmachine: STDOUT: 
	I0925 12:16:57.654011    4513 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:16:57.654031    4513 client.go:171] duration metric: took 388.236542ms to LocalClient.Create
	I0925 12:16:57.733213    4513 cache.go:162] opening:  /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0925 12:16:57.743720    4513 cache.go:162] opening:  /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0925 12:16:57.758717    4513 cache.go:162] opening:  /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0925 12:16:57.767081    4513 cache.go:162] opening:  /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0925 12:16:57.774374    4513 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0925 12:16:57.774399    4513 cache.go:162] opening:  /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0925 12:16:57.817194    4513 cache.go:162] opening:  /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0925 12:16:57.831112    4513 cache.go:162] opening:  /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0925 12:16:57.959507    4513 cache.go:157] /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0925 12:16:57.959560    4513 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 716.352416ms
	I0925 12:16:57.959640    4513 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0925 12:16:58.393782    4513 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0925 12:16:58.393881    4513 cache.go:162] opening:  /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0925 12:16:59.151120    4513 cache.go:157] /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0925 12:16:59.151157    4513 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.908108833s
	I0925 12:16:59.151195    4513 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0925 12:16:59.654355    4513 start.go:128] duration metric: took 2.410725959s to createHost
	I0925 12:16:59.654423    4513 start.go:83] releasing machines lock for "test-preload-896000", held for 2.4109145s
	W0925 12:16:59.654472    4513 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:16:59.668848    4513 out.go:177] * Deleting "test-preload-896000" in qemu2 ...
	I0925 12:16:59.688455    4513 cache.go:157] /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0925 12:16:59.688512    4513 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.445275833s
	I0925 12:16:59.688540    4513 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	W0925 12:16:59.702563    4513 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:16:59.702589    4513 start.go:729] Will try again in 5 seconds ...
	I0925 12:17:00.201084    4513 cache.go:157] /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0925 12:17:00.201136    4513 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.957903792s
	I0925 12:17:00.201164    4513 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0925 12:17:01.599855    4513 cache.go:157] /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0925 12:17:01.599903    4513 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.356895s
	I0925 12:17:01.599927    4513 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0925 12:17:01.696943    4513 cache.go:157] /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0925 12:17:01.696980    4513 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 4.453698333s
	I0925 12:17:01.697000    4513 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0925 12:17:02.489258    4513 cache.go:157] /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0925 12:17:02.489306    4513 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.246284833s
	I0925 12:17:02.489332    4513 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0925 12:17:04.702846    4513 start.go:360] acquireMachinesLock for test-preload-896000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:17:04.703357    4513 start.go:364] duration metric: took 426.208µs to acquireMachinesLock for "test-preload-896000"
	I0925 12:17:04.703506    4513 start.go:93] Provisioning new machine with config: &{Name:test-preload-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:17:04.703723    4513 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:17:04.713314    4513 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0925 12:17:04.764280    4513 start.go:159] libmachine.API.Create for "test-preload-896000" (driver="qemu2")
	I0925 12:17:04.764348    4513 client.go:168] LocalClient.Create starting
	I0925 12:17:04.764457    4513 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:17:04.764527    4513 main.go:141] libmachine: Decoding PEM data...
	I0925 12:17:04.764559    4513 main.go:141] libmachine: Parsing certificate...
	I0925 12:17:04.764645    4513 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:17:04.764693    4513 main.go:141] libmachine: Decoding PEM data...
	I0925 12:17:04.764710    4513 main.go:141] libmachine: Parsing certificate...
	I0925 12:17:04.765203    4513 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:17:04.937229    4513 main.go:141] libmachine: Creating SSH key...
	I0925 12:17:05.154299    4513 main.go:141] libmachine: Creating Disk image...
	I0925 12:17:05.154312    4513 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:17:05.154501    4513 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/test-preload-896000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/test-preload-896000/disk.qcow2
	I0925 12:17:05.163939    4513 main.go:141] libmachine: STDOUT: 
	I0925 12:17:05.163954    4513 main.go:141] libmachine: STDERR: 
	I0925 12:17:05.164010    4513 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/test-preload-896000/disk.qcow2 +20000M
	I0925 12:17:05.172066    4513 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:17:05.172081    4513 main.go:141] libmachine: STDERR: 
	I0925 12:17:05.172101    4513 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/test-preload-896000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/test-preload-896000/disk.qcow2
	I0925 12:17:05.172105    4513 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:17:05.172117    4513 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:17:05.172171    4513 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/test-preload-896000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/test-preload-896000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/test-preload-896000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:70:7b:b5:98:91 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/test-preload-896000/disk.qcow2
	I0925 12:17:05.173914    4513 main.go:141] libmachine: STDOUT: 
	I0925 12:17:05.173928    4513 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:17:05.173941    4513 client.go:171] duration metric: took 409.595584ms to LocalClient.Create
	I0925 12:17:07.104050    4513 cache.go:157] /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0925 12:17:07.104126    4513 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.86107325s
	I0925 12:17:07.104175    4513 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0925 12:17:07.104238    4513 cache.go:87] Successfully saved all images to host disk.
	I0925 12:17:07.174825    4513 start.go:128] duration metric: took 2.471121834s to createHost
	I0925 12:17:07.174875    4513 start.go:83] releasing machines lock for "test-preload-896000", held for 2.471539583s
	W0925 12:17:07.175181    4513 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-896000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-896000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:17:07.183671    4513 out.go:201] 
	W0925 12:17:07.192712    4513 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:17:07.192737    4513 out.go:270] * 
	* 
	W0925 12:17:07.195418    4513 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 12:17:07.204689    4513 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-896000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-09-25 12:17:07.223007 -0700 PDT m=+2903.311976751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-896000 -n test-preload-896000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-896000 -n test-preload-896000: exit status 7 (64.737334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-896000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-896000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-896000
--- FAIL: TestPreload (10.24s)
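
Every failure in this run bottoms out at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so no qemu2 VM ever boots. A minimal diagnostic sketch for the build host, assuming a Homebrew-managed socket_vmnet install (everything other than the paths quoted in the log above is an assumption):

	# Does the socket exist on disk?
	ls -l /var/run/socket_vmnet
	# Is the daemon registered with launchd? (Homebrew installs it as a root service)
	sudo launchctl list | grep -i socket_vmnet
	# Restart it if it was installed via Homebrew
	sudo brew services restart socket_vmnet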

TestScheduledStopUnix (10.17s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-794000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-794000 --memory=2048 --driver=qemu2 : exit status 80 (10.019598084s)

-- stdout --
	* [scheduled-stop-794000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-794000" primary control-plane node in "scheduled-stop-794000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-794000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-794000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-794000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-794000" primary control-plane node in "scheduled-stop-794000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-794000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-794000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-09-25 12:17:17.392918 -0700 PDT m=+2913.482075709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-794000 -n scheduled-stop-794000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-794000 -n scheduled-stop-794000: exit status 7 (68.436666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-794000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-794000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-794000
--- FAIL: TestScheduledStopUnix (10.17s)
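
The scheduled-stop logic is never reached; provisioning dies first. socket_vmnet_client is a small wrapper that connects to the vmnet socket and hands the connected descriptor to the wrapped command as fd 3 (hence -netdev socket,id=net0,fd=3 in the qemu invocations above). A hypothetical standalone check of just that handshake, reusing the paths shown in the log:

	# Exits 0 when the daemon answers; a dead daemon reproduces the
	# "Connection refused" error seen throughout this report.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true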

TestSkaffold (13.05s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe4270167287 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe4270167287 version: (1.065468958s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-439000 --memory=2600 --driver=qemu2 
E0925 12:17:22.593756    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/functional-251000/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-439000 --memory=2600 --driver=qemu2 : exit status 80 (10.01640275s)

-- stdout --
	* [skaffold-439000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-439000" primary control-plane node in "skaffold-439000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-439000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-439000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-439000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-439000" primary control-plane node in "skaffold-439000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-439000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-439000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-09-25 12:17:30.439728 -0700 PDT m=+2926.529127001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-439000 -n skaffold-439000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-439000 -n skaffold-439000: exit status 7 (63.44225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-439000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-439000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-439000
--- FAIL: TestSkaffold (13.05s)
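
skaffold itself is healthy (the version probe above succeeded in about a second); the test fails only because minikube cannot provision a VM. One way to bring the network daemon back up by hand, sketched from the socket_vmnet README (the gateway address is an illustrative default, not taken from this log):

	# Serve /var/run/socket_vmnet in the foreground as root
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet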

TestRunningBinaryUpgrade (606.29s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.4222508996 start -p running-upgrade-796000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.4222508996 start -p running-upgrade-796000 --memory=2200 --vm-driver=qemu2 : (1m3.73118175s)
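
This test exercises an in-place binary upgrade: boot a cluster with a released v1.26.0 minikube, then rerun start on the same profile with the freshly built binary. A rough manual equivalent, assuming the standard release bucket layout (the download URL pattern is an assumption, not taken from this log):

	# Fetch the old release and start a cluster with it
	curl -LO https://storage.googleapis.com/minikube/releases/v1.26.0/minikube-darwin-arm64
	chmod +x minikube-darwin-arm64
	./minikube-darwin-arm64 start -p running-upgrade --memory=2200 --vm-driver=qemu2
	# Restart the same profile with the binary under test
	out/minikube-darwin-arm64 start -p running-upgrade --memory=2200 --alsologtostderr -v=1 --driver=qemu2
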
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-796000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0925 12:20:41.887849    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-796000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m28.427844458s)

-- stdout --
	* [running-upgrade-796000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-796000" primary control-plane node in "running-upgrade-796000" cluster
	* Updating the running qemu2 "running-upgrade-796000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0925 12:19:17.836823    4893 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:19:17.836939    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:19:17.836943    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:19:17.836945    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:19:17.837075    4893 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:19:17.838202    4893 out.go:352] Setting JSON to false
	I0925 12:19:17.854937    4893 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4728,"bootTime":1727287229,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 12:19:17.855002    4893 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 12:19:17.859823    4893 out.go:177] * [running-upgrade-796000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 12:19:17.867825    4893 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 12:19:17.867884    4893 notify.go:220] Checking for updates...
	I0925 12:19:17.875808    4893 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:19:17.879797    4893 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 12:19:17.882858    4893 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 12:19:17.885919    4893 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 12:19:17.888834    4893 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 12:19:17.892113    4893 config.go:182] Loaded profile config "running-upgrade-796000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0925 12:19:17.895744    4893 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0925 12:19:17.898809    4893 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 12:19:17.902765    4893 out.go:177] * Using the qemu2 driver based on existing profile
	I0925 12:19:17.909781    4893 start.go:297] selected driver: qemu2
	I0925 12:19:17.909789    4893 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-796000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50275 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-796000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0925 12:19:17.909835    4893 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 12:19:17.911994    4893 cni.go:84] Creating CNI manager for ""
	I0925 12:19:17.912026    4893 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 12:19:17.912053    4893 start.go:340] cluster config:
	{Name:running-upgrade-796000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50275 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-796000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0925 12:19:17.912099    4893 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:19:17.919805    4893 out.go:177] * Starting "running-upgrade-796000" primary control-plane node in "running-upgrade-796000" cluster
	I0925 12:19:17.923609    4893 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0925 12:19:17.923622    4893 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0925 12:19:17.923626    4893 cache.go:56] Caching tarball of preloaded images
	I0925 12:19:17.923674    4893 preload.go:172] Found /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 12:19:17.923688    4893 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0925 12:19:17.923742    4893 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/running-upgrade-796000/config.json ...
	I0925 12:19:17.924070    4893 start.go:360] acquireMachinesLock for running-upgrade-796000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:19:17.924102    4893 start.go:364] duration metric: took 26µs to acquireMachinesLock for "running-upgrade-796000"
	I0925 12:19:17.924110    4893 start.go:96] Skipping create...Using existing machine configuration
	I0925 12:19:17.924116    4893 fix.go:54] fixHost starting: 
	I0925 12:19:17.924794    4893 fix.go:112] recreateIfNeeded on running-upgrade-796000: state=Running err=<nil>
	W0925 12:19:17.924802    4893 fix.go:138] unexpected machine state, will restart: <nil>
	I0925 12:19:17.928828    4893 out.go:177] * Updating the running qemu2 "running-upgrade-796000" VM ...
	I0925 12:19:17.935756    4893 machine.go:93] provisionDockerMachine start ...
	I0925 12:19:17.935795    4893 main.go:141] libmachine: Using SSH client type: native
	I0925 12:19:17.935902    4893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104bc1c00] 0x104bc4440 <nil>  [] 0s} localhost 50243 <nil> <nil>}
	I0925 12:19:17.935907    4893 main.go:141] libmachine: About to run SSH command:
	hostname
	I0925 12:19:17.995598    4893 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-796000
	
	I0925 12:19:17.995613    4893 buildroot.go:166] provisioning hostname "running-upgrade-796000"
	I0925 12:19:17.995659    4893 main.go:141] libmachine: Using SSH client type: native
	I0925 12:19:17.995778    4893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104bc1c00] 0x104bc4440 <nil>  [] 0s} localhost 50243 <nil> <nil>}
	I0925 12:19:17.995786    4893 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-796000 && echo "running-upgrade-796000" | sudo tee /etc/hostname
	I0925 12:19:18.055050    4893 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-796000
	
	I0925 12:19:18.055099    4893 main.go:141] libmachine: Using SSH client type: native
	I0925 12:19:18.055207    4893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104bc1c00] 0x104bc4440 <nil>  [] 0s} localhost 50243 <nil> <nil>}
	I0925 12:19:18.055217    4893 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-796000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-796000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-796000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0925 12:19:18.111502    4893 main.go:141] libmachine: SSH cmd err, output: <nil>: 
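The hosts-file edit above is written to be idempotent: nothing happens if some /etc/hosts line already ends with the hostname, an existing 127.0.1.1 entry is rewritten in place, and only as a last resort is a new line appended. A rough local equivalent in Go (a sketch; assumes os and strings are imported, and glosses over the \s matching of the grep version):

    // ensureHostsEntry mirrors the shell above: skip if present,
    // rewrite an existing 127.0.1.1 line, otherwise append.
    func ensureHostsEntry(path, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(string(data), "\n")
        for _, l := range lines {
            if strings.HasSuffix(strings.TrimSpace(l), " "+name) {
                return nil // already present
            }
        }
        for i, l := range lines {
            if strings.HasPrefix(l, "127.0.1.1") {
                lines[i] = "127.0.1.1 " + name
                return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
            }
        }
        lines = append(lines, "127.0.1.1 "+name)
        return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
    }
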
	I0925 12:19:18.111513    4893 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19681-1412/.minikube CaCertPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19681-1412/.minikube}
	I0925 12:19:18.111520    4893 buildroot.go:174] setting up certificates
	I0925 12:19:18.111524    4893 provision.go:84] configureAuth start
	I0925 12:19:18.111530    4893 provision.go:143] copyHostCerts
	I0925 12:19:18.111593    4893 exec_runner.go:144] found /Users/jenkins/minikube-integration/19681-1412/.minikube/ca.pem, removing ...
	I0925 12:19:18.111598    4893 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19681-1412/.minikube/ca.pem
	I0925 12:19:18.111723    4893 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19681-1412/.minikube/ca.pem (1082 bytes)
	I0925 12:19:18.111901    4893 exec_runner.go:144] found /Users/jenkins/minikube-integration/19681-1412/.minikube/cert.pem, removing ...
	I0925 12:19:18.111907    4893 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19681-1412/.minikube/cert.pem
	I0925 12:19:18.111955    4893 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19681-1412/.minikube/cert.pem (1123 bytes)
	I0925 12:19:18.112062    4893 exec_runner.go:144] found /Users/jenkins/minikube-integration/19681-1412/.minikube/key.pem, removing ...
	I0925 12:19:18.112066    4893 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19681-1412/.minikube/key.pem
	I0925 12:19:18.112119    4893 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19681-1412/.minikube/key.pem (1675 bytes)
	I0925 12:19:18.112220    4893 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-796000 san=[127.0.0.1 localhost minikube running-upgrade-796000]
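provision.go is minting a per-machine server certificate signed by the minikube CA, with exactly the SANs listed above (127.0.0.1, localhost, minikube, and the machine name) and the 26280h lifetime from the cluster config. A condensed sketch of that signing step using crypto/x509 (imports elided; caCert, caKey, and serverKey are assumed to be already loaded):

    func signServerCert(caCert *x509.Certificate, caKey crypto.Signer, serverKey *rsa.PrivateKey) ([]byte, error) {
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.running-upgrade-796000"}},
            DNSNames:     []string{"localhost", "minikube", "running-upgrade-796000"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        if err != nil {
            return nil, err
        }
        return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
    }
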
	I0925 12:19:18.284653    4893 provision.go:177] copyRemoteCerts
	I0925 12:19:18.284701    4893 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0925 12:19:18.284709    4893 sshutil.go:53] new ssh client: &{IP:localhost Port:50243 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/running-upgrade-796000/id_rsa Username:docker}
	I0925 12:19:18.314243    4893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0925 12:19:18.321299    4893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0925 12:19:18.328189    4893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0925 12:19:18.335227    4893 provision.go:87] duration metric: took 223.698292ms to configureAuth
	I0925 12:19:18.335235    4893 buildroot.go:189] setting minikube options for container-runtime
	I0925 12:19:18.335338    4893 config.go:182] Loaded profile config "running-upgrade-796000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0925 12:19:18.335377    4893 main.go:141] libmachine: Using SSH client type: native
	I0925 12:19:18.335468    4893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104bc1c00] 0x104bc4440 <nil>  [] 0s} localhost 50243 <nil> <nil>}
	I0925 12:19:18.335475    4893 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0925 12:19:18.396245    4893 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0925 12:19:18.396256    4893 buildroot.go:70] root file system type: tmpfs
	I0925 12:19:18.396305    4893 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0925 12:19:18.396362    4893 main.go:141] libmachine: Using SSH client type: native
	I0925 12:19:18.396479    4893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104bc1c00] 0x104bc4440 <nil>  [] 0s} localhost 50243 <nil> <nil>}
	I0925 12:19:18.396512    4893 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0925 12:19:18.454089    4893 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0925 12:19:18.454150    4893 main.go:141] libmachine: Using SSH client type: native
	I0925 12:19:18.454268    4893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104bc1c00] 0x104bc4440 <nil>  [] 0s} localhost 50243 <nil> <nil>}
	I0925 12:19:18.454276    4893 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0925 12:19:18.510266    4893 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0925 12:19:18.510280    4893 machine.go:96] duration metric: took 574.529458ms to provisionDockerMachine
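The unit update just above is deliberately a compare-then-swap: the rendered file goes to docker.service.new, "diff -u" exits 0 when it matches what is already installed (so nothing restarts), and only a real change triggers the mv / daemon-reload / enable / restart sequence. The same shape in Go, shelling out for the systemd steps (a sketch; assumes root and the paths from the log):

    func swapUnitIfChanged(current, next string) error {
        old, _ := os.ReadFile(current) // may not exist yet; treat as empty
        fresh, err := os.ReadFile(next)
        if err != nil {
            return err
        }
        if bytes.Equal(old, fresh) {
            return nil // unit unchanged: skip the restart entirely
        }
        if err := os.Rename(next, current); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"systemctl", "-f", "daemon-reload"},
            {"systemctl", "-f", "enable", "docker"},
            {"systemctl", "-f", "restart", "docker"},
        } {
            if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
                return fmt.Errorf("%v: %v: %s", args, err, out)
            }
        }
        return nil
    }
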
	I0925 12:19:18.510285    4893 start.go:293] postStartSetup for "running-upgrade-796000" (driver="qemu2")
	I0925 12:19:18.510292    4893 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0925 12:19:18.510351    4893 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0925 12:19:18.510360    4893 sshutil.go:53] new ssh client: &{IP:localhost Port:50243 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/running-upgrade-796000/id_rsa Username:docker}
	I0925 12:19:18.540898    4893 ssh_runner.go:195] Run: cat /etc/os-release
	I0925 12:19:18.542311    4893 info.go:137] Remote host: Buildroot 2021.02.12
	I0925 12:19:18.542319    4893 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19681-1412/.minikube/addons for local assets ...
	I0925 12:19:18.542388    4893 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19681-1412/.minikube/files for local assets ...
	I0925 12:19:18.542477    4893 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19681-1412/.minikube/files/etc/ssl/certs/19342.pem -> 19342.pem in /etc/ssl/certs
	I0925 12:19:18.542575    4893 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0925 12:19:18.545320    4893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/files/etc/ssl/certs/19342.pem --> /etc/ssl/certs/19342.pem (1708 bytes)
	I0925 12:19:18.552225    4893 start.go:296] duration metric: took 41.935875ms for postStartSetup
	I0925 12:19:18.552238    4893 fix.go:56] duration metric: took 628.135209ms for fixHost
	I0925 12:19:18.552277    4893 main.go:141] libmachine: Using SSH client type: native
	I0925 12:19:18.552383    4893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104bc1c00] 0x104bc4440 <nil>  [] 0s} localhost 50243 <nil> <nil>}
	I0925 12:19:18.552388    4893 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0925 12:19:18.610603    4893 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727291958.279590722
	
	I0925 12:19:18.610612    4893 fix.go:216] guest clock: 1727291958.279590722
	I0925 12:19:18.610616    4893 fix.go:229] Guest: 2024-09-25 12:19:18.279590722 -0700 PDT Remote: 2024-09-25 12:19:18.552239 -0700 PDT m=+0.734425917 (delta=-272.648278ms)
	I0925 12:19:18.610627    4893 fix.go:200] guest clock delta is within tolerance: -272.648278ms
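fix.go derives the guest/host skew by running "date +%s.%N" in the VM and subtracting the host clock; here the -272ms delta is inside tolerance, so no resync happens. Parsing that output is the only fiddly part (a sketch; the actual tolerance constant is not shown in the log):

    // clockDelta parses the guest's `date +%s.%N` output and returns guest-minus-host skew.
    func clockDelta(guestOut string) (time.Duration, error) {
        secStr, nsecStr, ok := strings.Cut(strings.TrimSpace(guestOut), ".")
        if !ok {
            return 0, fmt.Errorf("unexpected clock output %q", guestOut)
        }
        sec, err := strconv.ParseInt(secStr, 10, 64)
        if err != nil {
            return 0, err
        }
        nsec, err := strconv.ParseInt(nsecStr, 10, 64)
        if err != nil {
            return 0, err
        }
        return time.Unix(sec, nsec).Sub(time.Now()), nil
    }
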
	I0925 12:19:18.610629    4893 start.go:83] releasing machines lock for "running-upgrade-796000", held for 686.536458ms
	I0925 12:19:18.610704    4893 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0925 12:19:18.610707    4893 ssh_runner.go:195] Run: cat /version.json
	I0925 12:19:18.610713    4893 sshutil.go:53] new ssh client: &{IP:localhost Port:50243 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/running-upgrade-796000/id_rsa Username:docker}
	I0925 12:19:18.610722    4893 sshutil.go:53] new ssh client: &{IP:localhost Port:50243 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/running-upgrade-796000/id_rsa Username:docker}
	W0925 12:19:18.611455    4893 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50243: connect: connection refused
	I0925 12:19:18.611477    4893 retry.go:31] will retry after 214.269531ms: dial tcp [::1]:50243: connect: connection refused
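Transient dial failures like the one above are absorbed by retry.go rather than failing the step; the log shows a single retry scheduled after ~214ms. The pattern, schematically (the real backoff policy and attempt budget are not visible in the log; this sketch assumes a fixed count with jitter):

    func withRetry(attempts int, base time.Duration, f func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = f(); err == nil {
                return nil
            }
            d := base + time.Duration(rand.Int63n(int64(base))) // jittered delay
            log.Printf("will retry after %v: %v", d, err)
            time.Sleep(d)
        }
        return err
    }
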
	W0925 12:19:18.639327    4893 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0925 12:19:18.639371    4893 ssh_runner.go:195] Run: systemctl --version
	I0925 12:19:18.641835    4893 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0925 12:19:18.643422    4893 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0925 12:19:18.643452    4893 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0925 12:19:18.646249    4893 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0925 12:19:18.650528    4893 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0925 12:19:18.650535    4893 start.go:495] detecting cgroup driver to use...
	I0925 12:19:18.650595    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 12:19:18.655850    4893 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0925 12:19:18.658951    4893 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0925 12:19:18.661966    4893 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0925 12:19:18.661988    4893 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0925 12:19:18.665103    4893 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 12:19:18.668096    4893 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0925 12:19:18.671507    4893 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 12:19:18.674618    4893 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0925 12:19:18.677387    4893 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0925 12:19:18.680198    4893 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0925 12:19:18.683560    4893 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0925 12:19:18.687030    4893 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0925 12:19:18.689821    4893 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0925 12:19:18.692330    4893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 12:19:18.771526    4893 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0925 12:19:18.783240    4893 start.go:495] detecting cgroup driver to use...
	I0925 12:19:18.783304    4893 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0925 12:19:18.788418    4893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 12:19:18.793043    4893 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0925 12:19:18.802533    4893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 12:19:18.806978    4893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 12:19:18.811723    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 12:19:18.816765    4893 ssh_runner.go:195] Run: which cri-dockerd
	I0925 12:19:18.818127    4893 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0925 12:19:18.821351    4893 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0925 12:19:18.826047    4893 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0925 12:19:18.899564    4893 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0925 12:19:18.980203    4893 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0925 12:19:18.980259    4893 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0925 12:19:18.986168    4893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 12:19:19.062356    4893 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0925 12:19:20.729082    4893 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.666742167s)
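docker.go writes a small /etc/docker/daemon.json (130 bytes here) so dockerd agrees with the containerd edits above on the cgroupfs driver, then restarts the daemon. The exact file contents are not printed in the log; under that assumption, it is roughly equivalent to generating:

    // daemonJSON sketches the cgroup-driver pin; the real file's exact keys
    // are an assumption, since the log only records its size.
    func daemonJSON() ([]byte, error) {
        cfg := map[string]any{
            "exec-opts": []string{"native.cgroupdriver=cgroupfs"},
        }
        return json.MarshalIndent(cfg, "", "  ")
    }
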
	I0925 12:19:20.729157    4893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0925 12:19:20.733592    4893 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0925 12:19:20.739853    4893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0925 12:19:20.744381    4893 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0925 12:19:20.838841    4893 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0925 12:19:20.926038    4893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 12:19:21.004982    4893 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0925 12:19:21.011792    4893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0925 12:19:21.016832    4893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 12:19:21.085210    4893 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0925 12:19:21.128336    4893 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0925 12:19:21.128410    4893 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0925 12:19:21.130343    4893 start.go:563] Will wait 60s for crictl version
	I0925 12:19:21.130378    4893 ssh_runner.go:195] Run: which crictl
	I0925 12:19:21.131764    4893 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0925 12:19:21.147024    4893 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0925 12:19:21.147107    4893 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0925 12:19:21.162085    4893 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0925 12:19:21.182641    4893 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0925 12:19:21.182744    4893 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0925 12:19:21.184165    4893 kubeadm.go:883] updating cluster {Name:running-upgrade-796000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50275 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-796000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0925 12:19:21.184213    4893 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0925 12:19:21.184268    4893 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 12:19:21.194672    4893 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0925 12:19:21.194682    4893 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0925 12:19:21.194733    4893 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0925 12:19:21.197835    4893 ssh_runner.go:195] Run: which lz4
	I0925 12:19:21.199067    4893 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0925 12:19:21.200288    4893 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0925 12:19:21.200306    4893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0925 12:19:22.153389    4893 docker.go:649] duration metric: took 954.386167ms to copy over tarball
	I0925 12:19:22.153459    4893 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0925 12:19:23.393353    4893 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.239891166s)
	I0925 12:19:23.393366    4893 ssh_runner.go:146] rm: /preloaded.tar.lz4
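The preload path avoids pulling images over the network: the ~360MB tarball is scp'd into the VM, unpacked with lz4 directly over /var (where docker's layer store lives), and then deleted. The extraction step, as it would look from Go inside the guest (a sketch; assumes tar and lz4 exist in the Buildroot image, which the log confirms):

    func extractPreload() error {
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability", // preserve file capabilities
            "-I", "lz4", // decompress through lz4
            "-C", "/var", // docker's image store lives under /var/lib/docker
            "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("extract preload: %v: %s", err, out)
        }
        return os.Remove("/preloaded.tar.lz4") // reclaim the space inside the VM
    }
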
	I0925 12:19:23.409120    4893 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0925 12:19:23.412327    4893 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0925 12:19:23.417406    4893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 12:19:23.484549    4893 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0925 12:19:24.685825    4893 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.20128325s)
	I0925 12:19:24.685940    4893 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 12:19:24.701986    4893 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0925 12:19:24.701996    4893 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0925 12:19:24.702002    4893 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0925 12:19:24.706074    4893 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 12:19:24.707996    4893 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0925 12:19:24.710274    4893 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0925 12:19:24.710459    4893 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 12:19:24.712232    4893 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0925 12:19:24.712222    4893 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0925 12:19:24.714364    4893 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0925 12:19:24.714401    4893 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0925 12:19:24.715619    4893 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0925 12:19:24.715649    4893 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0925 12:19:24.717527    4893 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0925 12:19:24.717627    4893 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0925 12:19:24.718814    4893 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0925 12:19:24.719063    4893 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0925 12:19:24.719976    4893 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0925 12:19:24.720860    4893 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0925 12:19:25.114979    4893 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0925 12:19:25.135166    4893 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0925 12:19:25.135191    4893 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0925 12:19:25.135257    4893 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0925 12:19:25.146089    4893 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0925 12:19:25.146303    4893 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0925 12:19:25.156232    4893 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0925 12:19:25.156250    4893 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0925 12:19:25.156306    4893 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	W0925 12:19:25.158542    4893 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0925 12:19:25.158654    4893 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0925 12:19:25.170205    4893 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0925 12:19:25.170343    4893 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0925 12:19:25.171088    4893 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0925 12:19:25.171107    4893 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0925 12:19:25.171153    4893 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0925 12:19:25.172450    4893 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0925 12:19:25.172463    4893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0925 12:19:25.180170    4893 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0925 12:19:25.180181    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0925 12:19:25.181195    4893 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0925 12:19:25.184311    4893 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0925 12:19:25.184433    4893 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0925 12:19:25.191681    4893 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0925 12:19:25.218459    4893 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0925 12:19:25.218492    4893 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0925 12:19:25.218509    4893 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0925 12:19:25.218509    4893 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0925 12:19:25.218542    4893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0925 12:19:25.218544    4893 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0925 12:19:25.218570    4893 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0925 12:19:25.218572    4893 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0925 12:19:25.218606    4893 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0925 12:19:25.223109    4893 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0925 12:19:25.242701    4893 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0925 12:19:25.248478    4893 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0925 12:19:25.257258    4893 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0925 12:19:25.261456    4893 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0925 12:19:25.261478    4893 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0925 12:19:25.261550    4893 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0925 12:19:25.281120    4893 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0925 12:19:25.281136    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0925 12:19:25.282865    4893 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0925 12:19:25.282887    4893 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0925 12:19:25.282953    4893 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0925 12:19:25.294586    4893 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0925 12:19:25.294718    4893 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0925 12:19:25.325472    4893 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0925 12:19:25.325503    4893 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0925 12:19:25.325527    4893 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0925 12:19:25.325544    4893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0925 12:19:25.555707    4893 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0925 12:19:25.555721    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0925 12:19:25.595964    4893 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0925 12:19:25.596134    4893 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 12:19:25.691066    4893 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0925 12:19:25.691097    4893 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0925 12:19:25.691123    4893 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 12:19:25.691205    4893 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 12:19:26.150554    4893 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0925 12:19:26.150859    4893 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0925 12:19:26.154943    4893 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0925 12:19:26.154971    4893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0925 12:19:26.209820    4893 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0925 12:19:26.209834    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0925 12:19:26.444742    4893 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0925 12:19:26.444781    4893 cache_images.go:92] duration metric: took 1.742804s to LoadCachedImages
	W0925 12:19:26.444816    4893 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
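What LoadCachedImages was doing above, per image: inspect the ID inside the VM, and if it does not match the expected digest (here every amd64 preload image fails the check on the arm64 VM), remove it and stream the host's cached tarball into "docker load". The step fails only because kube-proxy's tarball is missing from the host cache. Schematically (a sketch; run is an assumed helper executing a command in the VM, and cacheFileFor's mapping is illustrative):

    // cacheFileFor maps "registry.k8s.io/pause:3.7" -> "/var/lib/minikube/images/pause_3.7" (illustrative).
    func cacheFileFor(img string) string {
        return "/var/lib/minikube/images/" + strings.ReplaceAll(path.Base(img), ":", "_")
    }

    func loadCachedImages(expected map[string]string, run func(args ...string) (string, error)) error {
        for img, wantID := range expected {
            gotID, err := run("docker", "image", "inspect", "--format", "{{.Id}}", img)
            if err == nil && strings.TrimSpace(gotID) == wantID {
                continue // right digest already present; no transfer needed
            }
            run("docker", "rmi", img) // drop the wrong-arch copy, if any
            if _, err := run("/bin/bash", "-c", "sudo cat "+cacheFileFor(img)+" | docker load"); err != nil {
                return fmt.Errorf("load %s: %w", img, err)
            }
        }
        return nil
    }
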
	I0925 12:19:26.444821    4893 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0925 12:19:26.444873    4893 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-796000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-796000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0925 12:19:26.444946    4893 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0925 12:19:26.466357    4893 cni.go:84] Creating CNI manager for ""
	I0925 12:19:26.466369    4893 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 12:19:26.466378    4893 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0925 12:19:26.466386    4893 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-796000 NodeName:running-upgrade-796000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0925 12:19:26.466467    4893 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-796000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
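The block above is one multi-document YAML: InitConfiguration and ClusterConfiguration for kubeadm, then KubeletConfiguration and KubeProxyConfiguration, joined by "---" and rendered from the kubeadm options struct logged at kubeadm.go:181. A toy version of that rendering with text/template (field and template names are illustrative, not minikube's actual template):

    var initTmpl = "apiVersion: kubeadm.k8s.io/v1beta3\n" +
        "kind: InitConfiguration\n" +
        "localAPIEndpoint:\n" +
        "  advertiseAddress: {{.AdvertiseAddress}}\n" +
        "  bindPort: {{.BindPort}}\n" +
        "nodeRegistration:\n" +
        "  criSocket: {{.CRISocket}}\n" +
        "  name: \"{{.NodeName}}\"\n"

    type initOpts struct {
        AdvertiseAddress string
        BindPort         int
        CRISocket        string
        NodeName         string
    }

    func renderInitConfig() (string, error) {
        var b strings.Builder
        t := template.Must(template.New("init").Parse(initTmpl))
        err := t.Execute(&b, initOpts{
            AdvertiseAddress: "10.0.2.15",
            BindPort:         8443,
            CRISocket:        "unix:///var/run/cri-dockerd.sock",
            NodeName:         "running-upgrade-796000",
        })
        return b.String(), err
    }
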
	I0925 12:19:26.466538    4893 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0925 12:19:26.469560    4893 binaries.go:44] Found k8s binaries, skipping transfer
	I0925 12:19:26.469593    4893 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0925 12:19:26.472232    4893 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0925 12:19:26.476842    4893 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0925 12:19:26.481860    4893 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0925 12:19:26.486757    4893 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0925 12:19:26.488003    4893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 12:19:26.551442    4893 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0925 12:19:26.557227    4893 certs.go:68] Setting up /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/running-upgrade-796000 for IP: 10.0.2.15
	I0925 12:19:26.557236    4893 certs.go:194] generating shared ca certs ...
	I0925 12:19:26.557245    4893 certs.go:226] acquiring lock for ca certs: {Name:mk58bb807ba332e9ca8b6e9b3a29d33fd7cd9838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:19:26.557401    4893 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19681-1412/.minikube/ca.key
	I0925 12:19:26.557444    4893 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19681-1412/.minikube/proxy-client-ca.key
	I0925 12:19:26.557451    4893 certs.go:256] generating profile certs ...
	I0925 12:19:26.557511    4893 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/running-upgrade-796000/client.key
	I0925 12:19:26.557528    4893 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/running-upgrade-796000/apiserver.key.e2d4e31b
	I0925 12:19:26.557544    4893 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/running-upgrade-796000/apiserver.crt.e2d4e31b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0925 12:19:26.590017    4893 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/running-upgrade-796000/apiserver.crt.e2d4e31b ...
	I0925 12:19:26.590022    4893 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/running-upgrade-796000/apiserver.crt.e2d4e31b: {Name:mk07e391306ceb724fd69c4df1dfbe5e0bc1aff8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:19:26.590366    4893 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/running-upgrade-796000/apiserver.key.e2d4e31b ...
	I0925 12:19:26.590372    4893 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/running-upgrade-796000/apiserver.key.e2d4e31b: {Name:mkf5c62fcfa621fabf7a6807fab6c1e818e781f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:19:26.590520    4893 certs.go:381] copying /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/running-upgrade-796000/apiserver.crt.e2d4e31b -> /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/running-upgrade-796000/apiserver.crt
	I0925 12:19:26.590671    4893 certs.go:385] copying /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/running-upgrade-796000/apiserver.key.e2d4e31b -> /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/running-upgrade-796000/apiserver.key
	I0925 12:19:26.590799    4893 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/running-upgrade-796000/proxy-client.key
	I0925 12:19:26.590926    4893 certs.go:484] found cert: /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/1934.pem (1338 bytes)
	W0925 12:19:26.590948    4893 certs.go:480] ignoring /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/1934_empty.pem, impossibly tiny 0 bytes
	I0925 12:19:26.590953    4893 certs.go:484] found cert: /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca-key.pem (1679 bytes)
	I0925 12:19:26.590975    4893 certs.go:484] found cert: /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem (1082 bytes)
	I0925 12:19:26.590995    4893 certs.go:484] found cert: /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem (1123 bytes)
	I0925 12:19:26.591017    4893 certs.go:484] found cert: /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/key.pem (1675 bytes)
	I0925 12:19:26.591059    4893 certs.go:484] found cert: /Users/jenkins/minikube-integration/19681-1412/.minikube/files/etc/ssl/certs/19342.pem (1708 bytes)
	I0925 12:19:26.591375    4893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0925 12:19:26.599011    4893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0925 12:19:26.606669    4893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0925 12:19:26.613458    4893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0925 12:19:26.620131    4893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/running-upgrade-796000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0925 12:19:26.627717    4893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/running-upgrade-796000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0925 12:19:26.635661    4893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/running-upgrade-796000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0925 12:19:26.642890    4893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/running-upgrade-796000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0925 12:19:26.650078    4893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/files/etc/ssl/certs/19342.pem --> /usr/share/ca-certificates/19342.pem (1708 bytes)
	I0925 12:19:26.656997    4893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0925 12:19:26.664233    4893 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/1934.pem --> /usr/share/ca-certificates/1934.pem (1338 bytes)
	I0925 12:19:26.671595    4893 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0925 12:19:26.677129    4893 ssh_runner.go:195] Run: openssl version
	I0925 12:19:26.678904    4893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0925 12:19:26.681880    4893 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0925 12:19:26.683262    4893 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 25 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I0925 12:19:26.683290    4893 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0925 12:19:26.685156    4893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0925 12:19:26.688164    4893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1934.pem && ln -fs /usr/share/ca-certificates/1934.pem /etc/ssl/certs/1934.pem"
	I0925 12:19:26.691588    4893 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1934.pem
	I0925 12:19:26.693119    4893 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 25 18:45 /usr/share/ca-certificates/1934.pem
	I0925 12:19:26.693149    4893 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1934.pem
	I0925 12:19:26.694897    4893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1934.pem /etc/ssl/certs/51391683.0"
	I0925 12:19:26.697476    4893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19342.pem && ln -fs /usr/share/ca-certificates/19342.pem /etc/ssl/certs/19342.pem"
	I0925 12:19:26.700439    4893 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19342.pem
	I0925 12:19:26.701915    4893 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 25 18:45 /usr/share/ca-certificates/19342.pem
	I0925 12:19:26.701936    4893 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19342.pem
	I0925 12:19:26.704914    4893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19342.pem /etc/ssl/certs/3ec20f2e.0"
	I0925 12:19:26.708240    4893 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0925 12:19:26.709834    4893 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0925 12:19:26.711849    4893 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0925 12:19:26.713565    4893 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0925 12:19:26.715233    4893 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0925 12:19:26.717221    4893 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0925 12:19:26.719082    4893 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
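Each "openssl x509 -checkend 86400" above asks whether the named certificate expires within the next 24 hours; a non-zero exit is what would push minikube into regenerating control-plane certs before restarting the cluster. The equivalent check in Go without shelling out (a sketch):

    // expiresWithin reports whether the PEM cert at path expires inside the window.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    // e.g. expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
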
	I0925 12:19:26.720991    4893 kubeadm.go:392] StartCluster: {Name:running-upgrade-796000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50275 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-796000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0925 12:19:26.721074    4893 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0925 12:19:26.731244    4893 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0925 12:19:26.734635    4893 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0925 12:19:26.734644    4893 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0925 12:19:26.734673    4893 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0925 12:19:26.737474    4893 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0925 12:19:26.737697    4893 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-796000" does not appear in /Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:19:26.737751    4893 kubeconfig.go:62] /Users/jenkins/minikube-integration/19681-1412/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-796000" cluster setting kubeconfig missing "running-upgrade-796000" context setting]
	I0925 12:19:26.737873    4893 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/kubeconfig: {Name:mkc011f0309eba8a9546287478e16310d103c97e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:19:26.739037    4893 kapi.go:59] client config for running-upgrade-796000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/running-upgrade-796000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/running-upgrade-796000/client.key", CAFile:"/Users/jenkins/minikube-integration/19681-1412/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10619a030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
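
The client config dumped above is a client-go `rest.Config` pointing at the repaired endpoint with the profile's client cert/key and CA. A roughly equivalent construction with client-go (a sketch using the paths from the log, not minikube's exact code):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Host and credential paths are taken from the rest.Config dump above.
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/running-upgrade-796000/client.crt",
			KeyFile:  "/Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/running-upgrade-796000/client.key",
			CAFile:   "/Users/jenkins/minikube-integration/19681-1412/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Println(err)
		return
	}
	_ = clientset // e.g. clientset.CoreV1().Pods("kube-system").List(...)
}
```
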
	I0925 12:19:26.739376    4893 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0925 12:19:26.742624    4893 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-796000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
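
Drift detection here is just `diff -u old new`: exit status 0 means the rendered kubeadm.yaml is unchanged, 1 means it differs (reconfigure, as happened above for the criSocket and cgroupDriver fields), and anything else is a diff failure. A sketch of that three-way branch in Go:

```go
package main

import (
	"fmt"
	"os/exec"
)

// configDrift runs `diff -u old new` and returns whether the files
// differ, plus the unified diff when they do.
func configDrift(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // exit 0: identical
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil // exit 1: files differ
	}
	return false, "", err // exit >1: diff itself failed
}

func main() {
	drifted, diff, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	if drifted {
		fmt.Println("detected kubeadm config drift:\n" + diff)
	}
}
```
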
	I0925 12:19:26.742631    4893 kubeadm.go:1160] stopping kube-system containers ...
	I0925 12:19:26.742684    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0925 12:19:26.754279    4893 docker.go:483] Stopping containers: [fb5c16c9f9c0 4a21524b590d ed159bb4a9c9 ba08e3d010d2 a6b9bffba162 acf1d589a549 99b08283dd72 d84c769abee1 bfc63bd4a8f0 23af25db94d5 04a0de1bcbb7 d7a34ce71f6c]
	I0925 12:19:26.754371    4893 ssh_runner.go:195] Run: docker stop fb5c16c9f9c0 4a21524b590d ed159bb4a9c9 ba08e3d010d2 a6b9bffba162 acf1d589a549 99b08283dd72 d84c769abee1 bfc63bd4a8f0 23af25db94d5 04a0de1bcbb7 d7a34ce71f6c
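
Stopping the kube-system containers is a two-step docker call: list IDs whose kubelet-assigned names match `k8s_.*_(kube-system)_`, then pass them all to one `docker stop`. A hedged sketch (run locally here; the log runs it through ssh_runner on the guest):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func stopKubeSystemContainers() error {
	// kubelet names docker containers k8s_<container>_<pod>_<namespace>_...
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
	if err != nil {
		return err
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return nil
	}
	fmt.Println("Stopping containers:", ids)
	return exec.Command("docker", append([]string{"stop"}, ids...)...).Run()
}

func main() {
	if err := stopKubeSystemContainers(); err != nil {
		fmt.Println(err)
	}
}
```
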
	I0925 12:19:26.766148    4893 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0925 12:19:26.848373    4893 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0925 12:19:26.852312    4893 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Sep 25 19:19 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Sep 25 19:19 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Sep 25 19:19 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Sep 25 19:19 /etc/kubernetes/scheduler.conf
	
	I0925 12:19:26.852348    4893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50275 /etc/kubernetes/admin.conf
	I0925 12:19:26.855581    4893 kubeadm.go:163] "https://control-plane.minikube.internal:50275" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50275 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0925 12:19:26.855611    4893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0925 12:19:26.858842    4893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50275 /etc/kubernetes/kubelet.conf
	I0925 12:19:26.861624    4893 kubeadm.go:163] "https://control-plane.minikube.internal:50275" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50275 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0925 12:19:26.861653    4893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0925 12:19:26.864510    4893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50275 /etc/kubernetes/controller-manager.conf
	I0925 12:19:26.867470    4893 kubeadm.go:163] "https://control-plane.minikube.internal:50275" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50275 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0925 12:19:26.867496    4893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0925 12:19:26.870433    4893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50275 /etc/kubernetes/scheduler.conf
	I0925 12:19:26.873001    4893 kubeadm.go:163] "https://control-plane.minikube.internal:50275" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50275 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0925 12:19:26.873028    4893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
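
Each of the four grep/rm pairs above asks the same question: does this kubeconfig still reference `https://control-plane.minikube.internal:50275`? `grep` exits 1 when the pattern is absent, so the file is treated as stale and removed for `kubeadm init phase kubeconfig` to regenerate. A compact sketch of the loop (without the sudo/SSH indirection the log uses):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:50275"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Stale or unreadable: delete so kubeadm regenerates it.
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			os.Remove(f)
		}
	}
}
```
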
	I0925 12:19:26.876143    4893 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0925 12:19:26.879350    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0925 12:19:26.912491    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0925 12:19:27.895058    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0925 12:19:28.095954    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0925 12:19:28.126501    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
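
The restart then replays individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the refreshed kubeadm.yaml, each with the versioned binaries directory prepended to PATH via `bash -c`. A sketch of the sequencing under those assumptions:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		// bash -c so $PATH expands, matching the Run: lines above.
		cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s", p, err, out)
			return
		}
	}
}
```
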
	I0925 12:19:28.149689    4893 api_server.go:52] waiting for apiserver process to appear ...
	I0925 12:19:28.149771    4893 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 12:19:28.651991    4893 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 12:19:29.151792    4893 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 12:19:29.651887    4893 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 12:19:29.661404    4893 api_server.go:72] duration metric: took 1.511745s to wait for apiserver process to appear ...
	I0925 12:19:29.661414    4893 api_server.go:88] waiting for apiserver healthz status ...
	I0925 12:19:29.661425    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:19:34.662181    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:19:34.662287    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:19:39.663125    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:19:39.663225    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:19:44.663891    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:19:44.663910    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:19:49.664350    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:19:49.664459    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:19:54.665605    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:19:54.665709    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:19:59.667272    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:19:59.667365    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:20:04.669201    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:20:04.669274    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:20:09.671373    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:20:09.671425    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:20:14.672782    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:20:14.672807    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:20:19.675074    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:20:19.675170    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:20:24.678025    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:20:24.678126    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:20:29.680786    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
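
The healthz wait is a poll of `https://10.0.2.15:8443/healthz` with roughly a 5 s per-request deadline; every attempt above ends in `context deadline exceeded` because the apiserver never comes up. A minimal version of such a poller (a sketch: it skips TLS verification, whereas the real client pins the cluster CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, attempts int) bool {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gaps between checks above
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return true
			}
		} else {
			fmt.Printf("stopped: %s: %v\n", url, err)
		}
	}
	return false
}

func main() {
	fmt.Println("healthy:", waitForHealthz("https://10.0.2.15:8443/healthz", 12))
}
```
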
	I0925 12:20:29.681325    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:20:29.724364    4893 logs.go:276] 2 containers: [b00200b3d354 a6b9bffba162]
	I0925 12:20:29.724533    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:20:29.745320    4893 logs.go:276] 2 containers: [b2d5ac54fa16 d84c769abee1]
	I0925 12:20:29.745458    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:20:29.759953    4893 logs.go:276] 1 containers: [51aa10e3317b]
	I0925 12:20:29.760048    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:20:29.772627    4893 logs.go:276] 2 containers: [f89f353ba7ff bfc63bd4a8f0]
	I0925 12:20:29.772713    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:20:29.783203    4893 logs.go:276] 1 containers: [da185f1cc653]
	I0925 12:20:29.783299    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:20:29.793416    4893 logs.go:276] 2 containers: [14ba24e33617 acf1d589a549]
	I0925 12:20:29.793507    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:20:29.804195    4893 logs.go:276] 0 containers: []
	W0925 12:20:29.804207    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:20:29.804287    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:20:29.814348    4893 logs.go:276] 1 containers: [797676f920e0]
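
Log gathering first resolves container IDs per control-plane component with `docker ps -a --filter=name=k8s_<component>`; most components report two IDs here because both the pre-restart and post-restart containers still exist. A sketch of the lookup:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containersFor returns all container IDs (running or exited) whose
// kubelet-assigned name starts with k8s_<component>.
func containersFor(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_"+component, "--format={{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
		ids, err := containersFor(c)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
```
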
	I0925 12:20:29.814365    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:20:29.814370    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:20:29.849823    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:20:29.849914    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
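
The two `Found kubelet problem` warnings come from scanning the `journalctl -u kubelet` output line by line for known failure patterns; the hits are buffered and replayed at the end of each gathering cycle under `X Problems detected in kubelet`. A rough sketch of such a scanner (the regex is illustrative, not minikube's actual pattern list):

```go
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"regexp"
	"strings"
)

// problemRe is a hypothetical pattern covering the reflector errors seen above.
var problemRe = regexp.MustCompile(`Failed to (watch|list) \*v1\.|forbidden`)

func main() {
	out, err := exec.Command("/bin/bash", "-c",
		"sudo journalctl -u kubelet -n 400").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	var problems []string
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		if problemRe.MatchString(sc.Text()) {
			problems = append(problems, sc.Text())
		}
	}
	for _, p := range problems {
		fmt.Println("Found kubelet problem:", p)
	}
}
```
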
	I0925 12:20:29.850383    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:20:29.850389    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:20:29.854546    4893 logs.go:123] Gathering logs for etcd [d84c769abee1] ...
	I0925 12:20:29.854553    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d84c769abee1"
	I0925 12:20:29.869927    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:20:29.869935    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:20:29.883105    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:20:29.883122    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:20:29.951640    4893 logs.go:123] Gathering logs for kube-apiserver [a6b9bffba162] ...
	I0925 12:20:29.951651    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6b9bffba162"
	I0925 12:20:29.972509    4893 logs.go:123] Gathering logs for etcd [b2d5ac54fa16] ...
	I0925 12:20:29.972520    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d5ac54fa16"
	I0925 12:20:29.986943    4893 logs.go:123] Gathering logs for coredns [51aa10e3317b] ...
	I0925 12:20:29.986952    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51aa10e3317b"
	I0925 12:20:29.999227    4893 logs.go:123] Gathering logs for kube-scheduler [f89f353ba7ff] ...
	I0925 12:20:29.999241    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f89f353ba7ff"
	I0925 12:20:30.010936    4893 logs.go:123] Gathering logs for kube-apiserver [b00200b3d354] ...
	I0925 12:20:30.010944    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b00200b3d354"
	I0925 12:20:30.024797    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:20:30.024806    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:20:30.051220    4893 logs.go:123] Gathering logs for kube-scheduler [bfc63bd4a8f0] ...
	I0925 12:20:30.051225    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfc63bd4a8f0"
	I0925 12:20:30.066432    4893 logs.go:123] Gathering logs for kube-proxy [da185f1cc653] ...
	I0925 12:20:30.066444    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da185f1cc653"
	I0925 12:20:30.078337    4893 logs.go:123] Gathering logs for kube-controller-manager [14ba24e33617] ...
	I0925 12:20:30.078345    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ba24e33617"
	I0925 12:20:30.099893    4893 logs.go:123] Gathering logs for kube-controller-manager [acf1d589a549] ...
	I0925 12:20:30.099902    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf1d589a549"
	I0925 12:20:30.114942    4893 logs.go:123] Gathering logs for storage-provisioner [797676f920e0] ...
	I0925 12:20:30.114951    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 797676f920e0"
	I0925 12:20:30.126166    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:20:30.126184    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:20:30.126210    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:20:30.126216    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:20:30.126220    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:20:30.126227    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:20:30.126230    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:20:40.130323    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:20:45.132721    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:20:45.133287    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:20:45.170693    4893 logs.go:276] 2 containers: [b00200b3d354 a6b9bffba162]
	I0925 12:20:45.170854    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:20:45.191980    4893 logs.go:276] 2 containers: [b2d5ac54fa16 d84c769abee1]
	I0925 12:20:45.192124    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:20:45.206934    4893 logs.go:276] 1 containers: [51aa10e3317b]
	I0925 12:20:45.207028    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:20:45.219237    4893 logs.go:276] 2 containers: [f89f353ba7ff bfc63bd4a8f0]
	I0925 12:20:45.219310    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:20:45.240393    4893 logs.go:276] 1 containers: [da185f1cc653]
	I0925 12:20:45.240480    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:20:45.259751    4893 logs.go:276] 2 containers: [14ba24e33617 acf1d589a549]
	I0925 12:20:45.259838    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:20:45.272689    4893 logs.go:276] 0 containers: []
	W0925 12:20:45.272700    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:20:45.272767    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:20:45.283731    4893 logs.go:276] 1 containers: [797676f920e0]
	I0925 12:20:45.283749    4893 logs.go:123] Gathering logs for kube-scheduler [f89f353ba7ff] ...
	I0925 12:20:45.283754    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f89f353ba7ff"
	I0925 12:20:45.295811    4893 logs.go:123] Gathering logs for coredns [51aa10e3317b] ...
	I0925 12:20:45.295824    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51aa10e3317b"
	I0925 12:20:45.307037    4893 logs.go:123] Gathering logs for kube-controller-manager [acf1d589a549] ...
	I0925 12:20:45.307049    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf1d589a549"
	I0925 12:20:45.321046    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:20:45.321058    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:20:45.332694    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:20:45.332704    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:20:45.337259    4893 logs.go:123] Gathering logs for kube-apiserver [b00200b3d354] ...
	I0925 12:20:45.337269    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b00200b3d354"
	I0925 12:20:45.351426    4893 logs.go:123] Gathering logs for etcd [b2d5ac54fa16] ...
	I0925 12:20:45.351440    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d5ac54fa16"
	I0925 12:20:45.366015    4893 logs.go:123] Gathering logs for etcd [d84c769abee1] ...
	I0925 12:20:45.366024    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d84c769abee1"
	I0925 12:20:45.380568    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:20:45.380579    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:20:45.415641    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:20:45.415732    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:20:45.416201    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:20:45.416205    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:20:45.450326    4893 logs.go:123] Gathering logs for kube-proxy [da185f1cc653] ...
	I0925 12:20:45.450339    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da185f1cc653"
	I0925 12:20:45.462361    4893 logs.go:123] Gathering logs for kube-controller-manager [14ba24e33617] ...
	I0925 12:20:45.462373    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ba24e33617"
	I0925 12:20:45.479502    4893 logs.go:123] Gathering logs for kube-apiserver [a6b9bffba162] ...
	I0925 12:20:45.479511    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6b9bffba162"
	I0925 12:20:45.498014    4893 logs.go:123] Gathering logs for kube-scheduler [bfc63bd4a8f0] ...
	I0925 12:20:45.498027    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfc63bd4a8f0"
	I0925 12:20:45.516950    4893 logs.go:123] Gathering logs for storage-provisioner [797676f920e0] ...
	I0925 12:20:45.516960    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 797676f920e0"
	I0925 12:20:45.529351    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:20:45.529365    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:20:45.553501    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:20:45.553511    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:20:45.553533    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:20:45.553537    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:20:45.553540    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:20:45.553582    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:20:45.553585    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:20:55.557766    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:21:00.560531    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:21:00.561117    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:21:00.598579    4893 logs.go:276] 2 containers: [b00200b3d354 a6b9bffba162]
	I0925 12:21:00.598777    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:21:00.619198    4893 logs.go:276] 2 containers: [b2d5ac54fa16 d84c769abee1]
	I0925 12:21:00.619342    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:21:00.635213    4893 logs.go:276] 1 containers: [51aa10e3317b]
	I0925 12:21:00.635319    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:21:00.647705    4893 logs.go:276] 2 containers: [f89f353ba7ff bfc63bd4a8f0]
	I0925 12:21:00.647793    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:21:00.661529    4893 logs.go:276] 1 containers: [da185f1cc653]
	I0925 12:21:00.661611    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:21:00.672138    4893 logs.go:276] 2 containers: [14ba24e33617 acf1d589a549]
	I0925 12:21:00.672216    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:21:00.682568    4893 logs.go:276] 0 containers: []
	W0925 12:21:00.682581    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:21:00.682650    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:21:00.693316    4893 logs.go:276] 1 containers: [797676f920e0]
	I0925 12:21:00.693336    4893 logs.go:123] Gathering logs for kube-apiserver [a6b9bffba162] ...
	I0925 12:21:00.693341    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6b9bffba162"
	I0925 12:21:00.711870    4893 logs.go:123] Gathering logs for kube-proxy [da185f1cc653] ...
	I0925 12:21:00.711879    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da185f1cc653"
	I0925 12:21:00.725793    4893 logs.go:123] Gathering logs for kube-controller-manager [14ba24e33617] ...
	I0925 12:21:00.725805    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ba24e33617"
	I0925 12:21:00.743085    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:21:00.743095    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:21:00.780585    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:21:00.780677    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:21:00.781139    4893 logs.go:123] Gathering logs for etcd [b2d5ac54fa16] ...
	I0925 12:21:00.781143    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d5ac54fa16"
	I0925 12:21:00.794905    4893 logs.go:123] Gathering logs for coredns [51aa10e3317b] ...
	I0925 12:21:00.794915    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51aa10e3317b"
	I0925 12:21:00.809345    4893 logs.go:123] Gathering logs for kube-scheduler [f89f353ba7ff] ...
	I0925 12:21:00.809355    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f89f353ba7ff"
	I0925 12:21:00.821043    4893 logs.go:123] Gathering logs for kube-controller-manager [acf1d589a549] ...
	I0925 12:21:00.821053    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf1d589a549"
	I0925 12:21:00.835545    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:21:00.835553    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:21:00.839855    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:21:00.839864    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:21:00.878513    4893 logs.go:123] Gathering logs for kube-apiserver [b00200b3d354] ...
	I0925 12:21:00.878521    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b00200b3d354"
	I0925 12:21:00.892671    4893 logs.go:123] Gathering logs for etcd [d84c769abee1] ...
	I0925 12:21:00.892682    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d84c769abee1"
	I0925 12:21:00.906967    4893 logs.go:123] Gathering logs for kube-scheduler [bfc63bd4a8f0] ...
	I0925 12:21:00.906976    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfc63bd4a8f0"
	I0925 12:21:00.921843    4893 logs.go:123] Gathering logs for storage-provisioner [797676f920e0] ...
	I0925 12:21:00.921852    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 797676f920e0"
	I0925 12:21:00.933075    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:21:00.933083    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:21:00.957192    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:21:00.957203    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:21:00.968963    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:21:00.968973    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:21:00.969002    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:21:00.969006    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:21:00.969009    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:21:00.969013    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:21:00.969015    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:21:10.973072    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:21:15.975866    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:21:15.976404    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:21:16.017552    4893 logs.go:276] 2 containers: [b00200b3d354 a6b9bffba162]
	I0925 12:21:16.017709    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:21:16.039774    4893 logs.go:276] 2 containers: [b2d5ac54fa16 d84c769abee1]
	I0925 12:21:16.039915    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:21:16.055595    4893 logs.go:276] 1 containers: [51aa10e3317b]
	I0925 12:21:16.055680    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:21:16.067733    4893 logs.go:276] 2 containers: [f89f353ba7ff bfc63bd4a8f0]
	I0925 12:21:16.067804    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:21:16.078263    4893 logs.go:276] 1 containers: [da185f1cc653]
	I0925 12:21:16.078337    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:21:16.089352    4893 logs.go:276] 2 containers: [14ba24e33617 acf1d589a549]
	I0925 12:21:16.089429    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:21:16.100294    4893 logs.go:276] 0 containers: []
	W0925 12:21:16.100305    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:21:16.100373    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:21:16.110605    4893 logs.go:276] 1 containers: [797676f920e0]
	I0925 12:21:16.110624    4893 logs.go:123] Gathering logs for kube-controller-manager [14ba24e33617] ...
	I0925 12:21:16.110629    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ba24e33617"
	I0925 12:21:16.128477    4893 logs.go:123] Gathering logs for storage-provisioner [797676f920e0] ...
	I0925 12:21:16.128487    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 797676f920e0"
	I0925 12:21:16.139586    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:21:16.139596    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:21:16.165255    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:21:16.165262    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:21:16.201787    4893 logs.go:123] Gathering logs for kube-apiserver [a6b9bffba162] ...
	I0925 12:21:16.201799    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6b9bffba162"
	I0925 12:21:16.220497    4893 logs.go:123] Gathering logs for etcd [b2d5ac54fa16] ...
	I0925 12:21:16.220506    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d5ac54fa16"
	I0925 12:21:16.234612    4893 logs.go:123] Gathering logs for etcd [d84c769abee1] ...
	I0925 12:21:16.234623    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d84c769abee1"
	I0925 12:21:16.248959    4893 logs.go:123] Gathering logs for kube-controller-manager [acf1d589a549] ...
	I0925 12:21:16.248968    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf1d589a549"
	I0925 12:21:16.262935    4893 logs.go:123] Gathering logs for kube-apiserver [b00200b3d354] ...
	I0925 12:21:16.262948    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b00200b3d354"
	I0925 12:21:16.279438    4893 logs.go:123] Gathering logs for coredns [51aa10e3317b] ...
	I0925 12:21:16.279453    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51aa10e3317b"
	I0925 12:21:16.291134    4893 logs.go:123] Gathering logs for kube-scheduler [f89f353ba7ff] ...
	I0925 12:21:16.291144    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f89f353ba7ff"
	I0925 12:21:16.302801    4893 logs.go:123] Gathering logs for kube-proxy [da185f1cc653] ...
	I0925 12:21:16.302812    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da185f1cc653"
	I0925 12:21:16.314516    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:21:16.314529    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:21:16.350577    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:21:16.350673    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:21:16.351133    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:21:16.351140    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:21:16.355639    4893 logs.go:123] Gathering logs for kube-scheduler [bfc63bd4a8f0] ...
	I0925 12:21:16.355647    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfc63bd4a8f0"
	I0925 12:21:16.370593    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:21:16.370604    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:21:16.382933    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:21:16.382943    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:21:16.382970    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:21:16.382974    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:21:16.382978    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:21:16.382981    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:21:16.382984    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:21:26.385162    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:21:31.387745    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:21:31.388041    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:21:31.415899    4893 logs.go:276] 2 containers: [b00200b3d354 a6b9bffba162]
	I0925 12:21:31.416038    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:21:31.432292    4893 logs.go:276] 2 containers: [b2d5ac54fa16 d84c769abee1]
	I0925 12:21:31.432389    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:21:31.445774    4893 logs.go:276] 1 containers: [51aa10e3317b]
	I0925 12:21:31.445865    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:21:31.464228    4893 logs.go:276] 2 containers: [f89f353ba7ff bfc63bd4a8f0]
	I0925 12:21:31.464308    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:21:31.478725    4893 logs.go:276] 1 containers: [da185f1cc653]
	I0925 12:21:31.478807    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:21:31.489840    4893 logs.go:276] 2 containers: [14ba24e33617 acf1d589a549]
	I0925 12:21:31.489918    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:21:31.500544    4893 logs.go:276] 0 containers: []
	W0925 12:21:31.500557    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:21:31.500624    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:21:31.511438    4893 logs.go:276] 1 containers: [797676f920e0]
	I0925 12:21:31.511456    4893 logs.go:123] Gathering logs for kube-apiserver [b00200b3d354] ...
	I0925 12:21:31.511463    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b00200b3d354"
	I0925 12:21:31.526904    4893 logs.go:123] Gathering logs for etcd [d84c769abee1] ...
	I0925 12:21:31.526914    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d84c769abee1"
	I0925 12:21:31.542189    4893 logs.go:123] Gathering logs for coredns [51aa10e3317b] ...
	I0925 12:21:31.542199    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51aa10e3317b"
	I0925 12:21:31.553859    4893 logs.go:123] Gathering logs for kube-controller-manager [14ba24e33617] ...
	I0925 12:21:31.553868    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ba24e33617"
	I0925 12:21:31.576941    4893 logs.go:123] Gathering logs for storage-provisioner [797676f920e0] ...
	I0925 12:21:31.576951    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 797676f920e0"
	I0925 12:21:31.589015    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:21:31.589026    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:21:31.614238    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:21:31.614250    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:21:31.625866    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:21:31.625878    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:21:31.630656    4893 logs.go:123] Gathering logs for kube-apiserver [a6b9bffba162] ...
	I0925 12:21:31.630664    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6b9bffba162"
	I0925 12:21:31.653781    4893 logs.go:123] Gathering logs for kube-scheduler [f89f353ba7ff] ...
	I0925 12:21:31.653792    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f89f353ba7ff"
	I0925 12:21:31.665327    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:21:31.665337    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:21:31.700792    4893 logs.go:123] Gathering logs for kube-proxy [da185f1cc653] ...
	I0925 12:21:31.700802    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da185f1cc653"
	I0925 12:21:31.712950    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:21:31.712960    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:21:31.750764    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:21:31.750860    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:21:31.751352    4893 logs.go:123] Gathering logs for kube-scheduler [bfc63bd4a8f0] ...
	I0925 12:21:31.751359    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfc63bd4a8f0"
	I0925 12:21:31.770533    4893 logs.go:123] Gathering logs for kube-controller-manager [acf1d589a549] ...
	I0925 12:21:31.770551    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf1d589a549"
	I0925 12:21:31.785676    4893 logs.go:123] Gathering logs for etcd [b2d5ac54fa16] ...
	I0925 12:21:31.785687    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d5ac54fa16"
	I0925 12:21:31.800168    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:21:31.800183    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:21:31.800215    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:21:31.800219    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:21:31.800222    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:21:31.800225    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:21:31.800228    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:21:41.804136    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:21:46.806381    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:21:46.806995    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:21:46.848544    4893 logs.go:276] 2 containers: [b00200b3d354 a6b9bffba162]
	I0925 12:21:46.848719    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:21:46.869851    4893 logs.go:276] 2 containers: [b2d5ac54fa16 d84c769abee1]
	I0925 12:21:46.869973    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:21:46.886876    4893 logs.go:276] 1 containers: [51aa10e3317b]
	I0925 12:21:46.886961    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:21:46.899288    4893 logs.go:276] 2 containers: [f89f353ba7ff bfc63bd4a8f0]
	I0925 12:21:46.899380    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:21:46.910202    4893 logs.go:276] 1 containers: [da185f1cc653]
	I0925 12:21:46.910289    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:21:46.921403    4893 logs.go:276] 2 containers: [14ba24e33617 acf1d589a549]
	I0925 12:21:46.921491    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:21:46.931989    4893 logs.go:276] 0 containers: []
	W0925 12:21:46.932001    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:21:46.932077    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:21:46.943020    4893 logs.go:276] 1 containers: [797676f920e0]
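The eight docker ps calls above discover one control-plane component at a time via the kubelet's k8s_<container>_<pod>_... container-naming convention. A hedged equivalent that collapses the discovery into a single loop (the component list is copied from the log; kindnet matching zero containers is expected on this profile, which does not deploy it):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      printf '%s: ' "$c"
      docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}' | xargs echo
    done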
	I0925 12:21:46.943041    4893 logs.go:123] Gathering logs for kube-controller-manager [acf1d589a549] ...
	I0925 12:21:46.943046    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf1d589a549"
	I0925 12:21:46.961108    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:21:46.961118    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:21:47.003687    4893 logs.go:123] Gathering logs for coredns [51aa10e3317b] ...
	I0925 12:21:47.003697    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51aa10e3317b"
	I0925 12:21:47.015016    4893 logs.go:123] Gathering logs for kube-scheduler [f89f353ba7ff] ...
	I0925 12:21:47.015027    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f89f353ba7ff"
	I0925 12:21:47.027194    4893 logs.go:123] Gathering logs for kube-scheduler [bfc63bd4a8f0] ...
	I0925 12:21:47.027209    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfc63bd4a8f0"
	I0925 12:21:47.041774    4893 logs.go:123] Gathering logs for kube-apiserver [a6b9bffba162] ...
	I0925 12:21:47.041784    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6b9bffba162"
	I0925 12:21:47.060923    4893 logs.go:123] Gathering logs for etcd [b2d5ac54fa16] ...
	I0925 12:21:47.060932    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d5ac54fa16"
	I0925 12:21:47.074760    4893 logs.go:123] Gathering logs for kube-proxy [da185f1cc653] ...
	I0925 12:21:47.074769    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da185f1cc653"
	I0925 12:21:47.089589    4893 logs.go:123] Gathering logs for kube-controller-manager [14ba24e33617] ...
	I0925 12:21:47.089599    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ba24e33617"
	I0925 12:21:47.107248    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:21:47.107257    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:21:47.111643    4893 logs.go:123] Gathering logs for storage-provisioner [797676f920e0] ...
	I0925 12:21:47.111652    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 797676f920e0"
	I0925 12:21:47.124885    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:21:47.124896    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
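The container-status command relies on a shell fallback: the backticked `which crictl || echo crictl` substitutes the literal word crictl when the binary is missing, the resulting sudo invocation then fails, and the `|| sudo docker ps -a` branch runs instead. The same idea written out as a standalone snippet (command -v used here in place of which, otherwise equivalent):

    # Prefer crictl when installed, fall back to the Docker CLI otherwise.
    sudo "$(command -v crictl || echo crictl)" ps -a || sudo docker ps -a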
	I0925 12:21:47.137208    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:21:47.137216    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:21:47.171863    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:21:47.171955    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
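Both flagged kubelet problems are one RBAC symptom seen from the list and watch sides of the same reflector: the apiserver's node authorizer rejects system:node:running-upgrade-796000 because it has not recorded any pod on that node referencing the coredns ConfigMap, which is common transient noise while a control plane is still settling after an upgrade. Assuming a working kubeconfig, one way to re-ask the same authorizer chain by hand (identity and namespace copied from the log):

    kubectl auth can-i list configmaps -n kube-system \
      --as=system:node:running-upgrade-796000 --as-group=system:nodes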
	I0925 12:21:47.172426    4893 logs.go:123] Gathering logs for kube-apiserver [b00200b3d354] ...
	I0925 12:21:47.172431    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b00200b3d354"
	I0925 12:21:47.186254    4893 logs.go:123] Gathering logs for etcd [d84c769abee1] ...
	I0925 12:21:47.186265    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d84c769abee1"
	I0925 12:21:47.201165    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:21:47.201174    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:21:47.226209    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:21:47.226217    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:21:47.226245    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:21:47.226250    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:21:47.226253    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:21:47.226271    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:21:47.226274    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:21:57.229910    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:22:02.232075    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:22:02.232269    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:22:02.249725    4893 logs.go:276] 2 containers: [b00200b3d354 a6b9bffba162]
	I0925 12:22:02.249827    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:22:02.262977    4893 logs.go:276] 2 containers: [b2d5ac54fa16 d84c769abee1]
	I0925 12:22:02.263066    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:22:02.274693    4893 logs.go:276] 1 containers: [51aa10e3317b]
	I0925 12:22:02.274780    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:22:02.285647    4893 logs.go:276] 2 containers: [f89f353ba7ff bfc63bd4a8f0]
	I0925 12:22:02.285759    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:22:02.313599    4893 logs.go:276] 1 containers: [da185f1cc653]
	I0925 12:22:02.313678    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:22:02.325035    4893 logs.go:276] 2 containers: [14ba24e33617 acf1d589a549]
	I0925 12:22:02.325112    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:22:02.336295    4893 logs.go:276] 0 containers: []
	W0925 12:22:02.336305    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:22:02.336372    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:22:02.348344    4893 logs.go:276] 1 containers: [797676f920e0]
	I0925 12:22:02.348364    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:22:02.348370    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:22:02.352615    4893 logs.go:123] Gathering logs for etcd [d84c769abee1] ...
	I0925 12:22:02.352622    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d84c769abee1"
	I0925 12:22:02.367190    4893 logs.go:123] Gathering logs for coredns [51aa10e3317b] ...
	I0925 12:22:02.367203    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51aa10e3317b"
	I0925 12:22:02.384364    4893 logs.go:123] Gathering logs for kube-scheduler [f89f353ba7ff] ...
	I0925 12:22:02.384375    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f89f353ba7ff"
	I0925 12:22:02.399474    4893 logs.go:123] Gathering logs for kube-controller-manager [acf1d589a549] ...
	I0925 12:22:02.399484    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf1d589a549"
	I0925 12:22:02.413687    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:22:02.413698    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:22:02.425400    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:22:02.425439    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:22:02.461704    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:22:02.461720    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:22:02.498781    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:22:02.498876    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:22:02.499361    4893 logs.go:123] Gathering logs for etcd [b2d5ac54fa16] ...
	I0925 12:22:02.499366    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d5ac54fa16"
	I0925 12:22:02.513518    4893 logs.go:123] Gathering logs for kube-proxy [da185f1cc653] ...
	I0925 12:22:02.513531    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da185f1cc653"
	I0925 12:22:02.531200    4893 logs.go:123] Gathering logs for storage-provisioner [797676f920e0] ...
	I0925 12:22:02.531211    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 797676f920e0"
	I0925 12:22:02.546873    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:22:02.546887    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:22:02.571783    4893 logs.go:123] Gathering logs for kube-apiserver [b00200b3d354] ...
	I0925 12:22:02.571790    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b00200b3d354"
	I0925 12:22:02.586540    4893 logs.go:123] Gathering logs for kube-apiserver [a6b9bffba162] ...
	I0925 12:22:02.586550    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6b9bffba162"
	I0925 12:22:02.606086    4893 logs.go:123] Gathering logs for kube-scheduler [bfc63bd4a8f0] ...
	I0925 12:22:02.606102    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfc63bd4a8f0"
	I0925 12:22:02.620713    4893 logs.go:123] Gathering logs for kube-controller-manager [14ba24e33617] ...
	I0925 12:22:02.620724    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ba24e33617"
	I0925 12:22:02.637804    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:22:02.637818    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:22:02.637856    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:22:02.637861    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:22:02.637864    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:22:02.637868    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:22:02.637870    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:22:12.639938    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:22:17.642146    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:22:17.642590    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:22:17.676390    4893 logs.go:276] 2 containers: [b00200b3d354 a6b9bffba162]
	I0925 12:22:17.676571    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:22:17.696612    4893 logs.go:276] 2 containers: [b2d5ac54fa16 d84c769abee1]
	I0925 12:22:17.696750    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:22:17.710766    4893 logs.go:276] 1 containers: [51aa10e3317b]
	I0925 12:22:17.710864    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:22:17.723204    4893 logs.go:276] 2 containers: [f89f353ba7ff bfc63bd4a8f0]
	I0925 12:22:17.723289    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:22:17.733744    4893 logs.go:276] 1 containers: [da185f1cc653]
	I0925 12:22:17.733817    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:22:17.751429    4893 logs.go:276] 2 containers: [14ba24e33617 acf1d589a549]
	I0925 12:22:17.751504    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:22:17.769932    4893 logs.go:276] 0 containers: []
	W0925 12:22:17.769946    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:22:17.770008    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:22:17.781228    4893 logs.go:276] 1 containers: [797676f920e0]
	I0925 12:22:17.781244    4893 logs.go:123] Gathering logs for kube-proxy [da185f1cc653] ...
	I0925 12:22:17.781249    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da185f1cc653"
	I0925 12:22:17.796504    4893 logs.go:123] Gathering logs for etcd [b2d5ac54fa16] ...
	I0925 12:22:17.796515    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d5ac54fa16"
	I0925 12:22:17.811105    4893 logs.go:123] Gathering logs for etcd [d84c769abee1] ...
	I0925 12:22:17.811115    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d84c769abee1"
	I0925 12:22:17.826098    4893 logs.go:123] Gathering logs for kube-controller-manager [acf1d589a549] ...
	I0925 12:22:17.826110    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf1d589a549"
	I0925 12:22:17.840115    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:22:17.840125    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:22:17.875324    4893 logs.go:123] Gathering logs for kube-apiserver [a6b9bffba162] ...
	I0925 12:22:17.875335    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6b9bffba162"
	I0925 12:22:17.895391    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:22:17.895404    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:22:17.907600    4893 logs.go:123] Gathering logs for storage-provisioner [797676f920e0] ...
	I0925 12:22:17.907612    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 797676f920e0"
	I0925 12:22:17.919414    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:22:17.919423    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:22:17.943643    4893 logs.go:123] Gathering logs for kube-apiserver [b00200b3d354] ...
	I0925 12:22:17.943662    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b00200b3d354"
	I0925 12:22:17.958989    4893 logs.go:123] Gathering logs for coredns [51aa10e3317b] ...
	I0925 12:22:17.959002    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51aa10e3317b"
	I0925 12:22:17.974759    4893 logs.go:123] Gathering logs for kube-scheduler [f89f353ba7ff] ...
	I0925 12:22:17.974769    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f89f353ba7ff"
	I0925 12:22:17.987409    4893 logs.go:123] Gathering logs for kube-scheduler [bfc63bd4a8f0] ...
	I0925 12:22:17.987421    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfc63bd4a8f0"
	I0925 12:22:18.006728    4893 logs.go:123] Gathering logs for kube-controller-manager [14ba24e33617] ...
	I0925 12:22:18.006741    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ba24e33617"
	I0925 12:22:18.025162    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:22:18.025172    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:22:18.062675    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:22:18.062767    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:22:18.063227    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:22:18.063232    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:22:18.067536    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:22:18.067544    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:22:18.067568    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:22:18.067573    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:22:18.067576    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:22:18.067580    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:22:18.067583    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:22:28.071584    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:22:33.073809    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:22:33.074006    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:22:33.085870    4893 logs.go:276] 2 containers: [b00200b3d354 a6b9bffba162]
	I0925 12:22:33.085959    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:22:33.098345    4893 logs.go:276] 2 containers: [b2d5ac54fa16 d84c769abee1]
	I0925 12:22:33.098441    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:22:33.112457    4893 logs.go:276] 1 containers: [51aa10e3317b]
	I0925 12:22:33.112533    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:22:33.123170    4893 logs.go:276] 2 containers: [f89f353ba7ff bfc63bd4a8f0]
	I0925 12:22:33.123262    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:22:33.133878    4893 logs.go:276] 1 containers: [da185f1cc653]
	I0925 12:22:33.133962    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:22:33.144480    4893 logs.go:276] 2 containers: [14ba24e33617 acf1d589a549]
	I0925 12:22:33.144576    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:22:33.154916    4893 logs.go:276] 0 containers: []
	W0925 12:22:33.154929    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:22:33.155011    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:22:33.166151    4893 logs.go:276] 1 containers: [797676f920e0]
	I0925 12:22:33.166169    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:22:33.166174    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:22:33.202187    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:22:33.202280    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:22:33.202752    4893 logs.go:123] Gathering logs for kube-scheduler [bfc63bd4a8f0] ...
	I0925 12:22:33.202757    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfc63bd4a8f0"
	I0925 12:22:33.218076    4893 logs.go:123] Gathering logs for kube-proxy [da185f1cc653] ...
	I0925 12:22:33.218087    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da185f1cc653"
	I0925 12:22:33.238802    4893 logs.go:123] Gathering logs for kube-controller-manager [14ba24e33617] ...
	I0925 12:22:33.238814    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ba24e33617"
	I0925 12:22:33.256488    4893 logs.go:123] Gathering logs for storage-provisioner [797676f920e0] ...
	I0925 12:22:33.256498    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 797676f920e0"
	I0925 12:22:33.268246    4893 logs.go:123] Gathering logs for coredns [51aa10e3317b] ...
	I0925 12:22:33.268259    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51aa10e3317b"
	I0925 12:22:33.279684    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:22:33.279695    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:22:33.283969    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:22:33.283975    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:22:33.319168    4893 logs.go:123] Gathering logs for kube-apiserver [a6b9bffba162] ...
	I0925 12:22:33.319180    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6b9bffba162"
	I0925 12:22:33.338560    4893 logs.go:123] Gathering logs for etcd [b2d5ac54fa16] ...
	I0925 12:22:33.338575    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d5ac54fa16"
	I0925 12:22:33.352850    4893 logs.go:123] Gathering logs for kube-scheduler [f89f353ba7ff] ...
	I0925 12:22:33.352865    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f89f353ba7ff"
	I0925 12:22:33.366228    4893 logs.go:123] Gathering logs for kube-controller-manager [acf1d589a549] ...
	I0925 12:22:33.366239    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf1d589a549"
	I0925 12:22:33.380094    4893 logs.go:123] Gathering logs for kube-apiserver [b00200b3d354] ...
	I0925 12:22:33.380103    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b00200b3d354"
	I0925 12:22:33.394349    4893 logs.go:123] Gathering logs for etcd [d84c769abee1] ...
	I0925 12:22:33.394362    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d84c769abee1"
	I0925 12:22:33.409611    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:22:33.409624    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:22:33.435068    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:22:33.435075    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:22:33.447129    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:22:33.447140    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:22:33.447168    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:22:33.447173    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:22:33.447176    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:22:33.447180    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:22:33.447182    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:22:43.451296    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:22:48.452242    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:22:48.452351    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:22:48.464427    4893 logs.go:276] 2 containers: [b00200b3d354 a6b9bffba162]
	I0925 12:22:48.464519    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:22:48.479751    4893 logs.go:276] 2 containers: [b2d5ac54fa16 d84c769abee1]
	I0925 12:22:48.479842    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:22:48.491971    4893 logs.go:276] 1 containers: [51aa10e3317b]
	I0925 12:22:48.492059    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:22:48.504523    4893 logs.go:276] 2 containers: [f89f353ba7ff bfc63bd4a8f0]
	I0925 12:22:48.504610    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:22:48.517149    4893 logs.go:276] 1 containers: [da185f1cc653]
	I0925 12:22:48.517247    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:22:48.531252    4893 logs.go:276] 2 containers: [14ba24e33617 acf1d589a549]
	I0925 12:22:48.531348    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:22:48.542729    4893 logs.go:276] 0 containers: []
	W0925 12:22:48.542743    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:22:48.542823    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:22:48.555648    4893 logs.go:276] 1 containers: [797676f920e0]
	I0925 12:22:48.555668    4893 logs.go:123] Gathering logs for etcd [d84c769abee1] ...
	I0925 12:22:48.555673    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d84c769abee1"
	I0925 12:22:48.572683    4893 logs.go:123] Gathering logs for etcd [b2d5ac54fa16] ...
	I0925 12:22:48.572695    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d5ac54fa16"
	I0925 12:22:48.588459    4893 logs.go:123] Gathering logs for kube-apiserver [b00200b3d354] ...
	I0925 12:22:48.588472    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b00200b3d354"
	I0925 12:22:48.604701    4893 logs.go:123] Gathering logs for kube-scheduler [f89f353ba7ff] ...
	I0925 12:22:48.604719    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f89f353ba7ff"
	I0925 12:22:48.617419    4893 logs.go:123] Gathering logs for storage-provisioner [797676f920e0] ...
	I0925 12:22:48.617432    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 797676f920e0"
	I0925 12:22:48.630418    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:22:48.630432    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:22:48.669572    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:22:48.669669    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:22:48.670162    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:22:48.670170    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:22:48.708924    4893 logs.go:123] Gathering logs for coredns [51aa10e3317b] ...
	I0925 12:22:48.708938    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51aa10e3317b"
	I0925 12:22:48.721918    4893 logs.go:123] Gathering logs for kube-scheduler [bfc63bd4a8f0] ...
	I0925 12:22:48.721932    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfc63bd4a8f0"
	I0925 12:22:48.737684    4893 logs.go:123] Gathering logs for kube-controller-manager [14ba24e33617] ...
	I0925 12:22:48.737696    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ba24e33617"
	I0925 12:22:48.760369    4893 logs.go:123] Gathering logs for kube-controller-manager [acf1d589a549] ...
	I0925 12:22:48.760382    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf1d589a549"
	I0925 12:22:48.775715    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:22:48.775726    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:22:48.789862    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:22:48.789875    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:22:48.794995    4893 logs.go:123] Gathering logs for kube-proxy [da185f1cc653] ...
	I0925 12:22:48.795009    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da185f1cc653"
	I0925 12:22:48.808634    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:22:48.808646    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:22:48.834457    4893 logs.go:123] Gathering logs for kube-apiserver [a6b9bffba162] ...
	I0925 12:22:48.834474    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6b9bffba162"
	I0925 12:22:48.855957    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:22:48.855969    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:22:48.856000    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:22:48.856005    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:22:48.856008    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:22:48.856011    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:22:48.856014    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:22:58.859919    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:23:03.862208    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:23:03.862753    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:23:03.902011    4893 logs.go:276] 2 containers: [b00200b3d354 a6b9bffba162]
	I0925 12:23:03.902186    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:23:03.923906    4893 logs.go:276] 2 containers: [b2d5ac54fa16 d84c769abee1]
	I0925 12:23:03.924056    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:23:03.939077    4893 logs.go:276] 1 containers: [51aa10e3317b]
	I0925 12:23:03.939183    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:23:03.951369    4893 logs.go:276] 2 containers: [f89f353ba7ff bfc63bd4a8f0]
	I0925 12:23:03.951456    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:23:03.961755    4893 logs.go:276] 1 containers: [da185f1cc653]
	I0925 12:23:03.961830    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:23:03.972765    4893 logs.go:276] 2 containers: [14ba24e33617 acf1d589a549]
	I0925 12:23:03.972838    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:23:03.987706    4893 logs.go:276] 0 containers: []
	W0925 12:23:03.987717    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:23:03.987789    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:23:03.998472    4893 logs.go:276] 1 containers: [797676f920e0]
	I0925 12:23:03.998488    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:23:03.998494    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:23:04.010369    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:23:04.010383    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:23:04.045527    4893 logs.go:123] Gathering logs for etcd [b2d5ac54fa16] ...
	I0925 12:23:04.045540    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d5ac54fa16"
	I0925 12:23:04.067368    4893 logs.go:123] Gathering logs for kube-scheduler [f89f353ba7ff] ...
	I0925 12:23:04.067382    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f89f353ba7ff"
	I0925 12:23:04.079805    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:23:04.079821    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:23:04.116375    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:23:04.116469    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:23:04.116940    4893 logs.go:123] Gathering logs for kube-controller-manager [14ba24e33617] ...
	I0925 12:23:04.116946    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ba24e33617"
	I0925 12:23:04.135223    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:23:04.135237    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:23:04.158283    4893 logs.go:123] Gathering logs for kube-controller-manager [acf1d589a549] ...
	I0925 12:23:04.158291    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf1d589a549"
	I0925 12:23:04.172348    4893 logs.go:123] Gathering logs for storage-provisioner [797676f920e0] ...
	I0925 12:23:04.172360    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 797676f920e0"
	I0925 12:23:04.183851    4893 logs.go:123] Gathering logs for kube-apiserver [a6b9bffba162] ...
	I0925 12:23:04.183862    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6b9bffba162"
	I0925 12:23:04.204023    4893 logs.go:123] Gathering logs for etcd [d84c769abee1] ...
	I0925 12:23:04.204034    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d84c769abee1"
	I0925 12:23:04.218788    4893 logs.go:123] Gathering logs for kube-scheduler [bfc63bd4a8f0] ...
	I0925 12:23:04.218800    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfc63bd4a8f0"
	I0925 12:23:04.234648    4893 logs.go:123] Gathering logs for kube-proxy [da185f1cc653] ...
	I0925 12:23:04.234660    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da185f1cc653"
	I0925 12:23:04.246809    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:23:04.246824    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:23:04.251067    4893 logs.go:123] Gathering logs for kube-apiserver [b00200b3d354] ...
	I0925 12:23:04.251078    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b00200b3d354"
	I0925 12:23:04.264713    4893 logs.go:123] Gathering logs for coredns [51aa10e3317b] ...
	I0925 12:23:04.264729    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51aa10e3317b"
	I0925 12:23:04.280063    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:23:04.280075    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:23:04.280101    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:23:04.280106    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:23:04.280109    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:23:04.280113    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:23:04.280117    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:23:14.282336    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:23:19.284485    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:23:19.284800    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:23:19.311676    4893 logs.go:276] 2 containers: [b00200b3d354 a6b9bffba162]
	I0925 12:23:19.311825    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:23:19.328782    4893 logs.go:276] 2 containers: [b2d5ac54fa16 d84c769abee1]
	I0925 12:23:19.328878    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:23:19.342083    4893 logs.go:276] 1 containers: [51aa10e3317b]
	I0925 12:23:19.342181    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:23:19.353505    4893 logs.go:276] 2 containers: [f89f353ba7ff bfc63bd4a8f0]
	I0925 12:23:19.353589    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:23:19.371965    4893 logs.go:276] 1 containers: [da185f1cc653]
	I0925 12:23:19.372047    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:23:19.382707    4893 logs.go:276] 2 containers: [14ba24e33617 acf1d589a549]
	I0925 12:23:19.382793    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:23:19.394110    4893 logs.go:276] 0 containers: []
	W0925 12:23:19.394122    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:23:19.394191    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:23:19.404792    4893 logs.go:276] 1 containers: [797676f920e0]
	I0925 12:23:19.404808    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:23:19.404813    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:23:19.409899    4893 logs.go:123] Gathering logs for etcd [d84c769abee1] ...
	I0925 12:23:19.409907    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d84c769abee1"
	I0925 12:23:19.426306    4893 logs.go:123] Gathering logs for kube-controller-manager [14ba24e33617] ...
	I0925 12:23:19.426315    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ba24e33617"
	I0925 12:23:19.443469    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:23:19.443480    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:23:19.479426    4893 logs.go:123] Gathering logs for kube-apiserver [a6b9bffba162] ...
	I0925 12:23:19.479439    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6b9bffba162"
	I0925 12:23:19.498563    4893 logs.go:123] Gathering logs for etcd [b2d5ac54fa16] ...
	I0925 12:23:19.498574    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d5ac54fa16"
	I0925 12:23:19.513168    4893 logs.go:123] Gathering logs for coredns [51aa10e3317b] ...
	I0925 12:23:19.513189    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51aa10e3317b"
	I0925 12:23:19.531374    4893 logs.go:123] Gathering logs for kube-scheduler [bfc63bd4a8f0] ...
	I0925 12:23:19.531386    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfc63bd4a8f0"
	I0925 12:23:19.548374    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:23:19.548386    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:23:19.584020    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:23:19.584110    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:23:19.584574    4893 logs.go:123] Gathering logs for kube-proxy [da185f1cc653] ...
	I0925 12:23:19.584579    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da185f1cc653"
	I0925 12:23:19.595952    4893 logs.go:123] Gathering logs for storage-provisioner [797676f920e0] ...
	I0925 12:23:19.595962    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 797676f920e0"
	I0925 12:23:19.607434    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:23:19.607447    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:23:19.631067    4893 logs.go:123] Gathering logs for kube-apiserver [b00200b3d354] ...
	I0925 12:23:19.631075    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b00200b3d354"
	I0925 12:23:19.644536    4893 logs.go:123] Gathering logs for kube-scheduler [f89f353ba7ff] ...
	I0925 12:23:19.644547    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f89f353ba7ff"
	I0925 12:23:19.660151    4893 logs.go:123] Gathering logs for kube-controller-manager [acf1d589a549] ...
	I0925 12:23:19.660161    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf1d589a549"
	I0925 12:23:19.674006    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:23:19.674019    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:23:19.685793    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:23:19.685805    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:23:19.685834    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:23:19.685839    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:23:19.685843    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:23:19.685846    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:23:19.685887    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
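The recurring "Found kubelet problem" entries above are the node authorizer denying this node's kubelet a list of the kube-system/coredns ConfigMap: "no relationship found" means the authorizer saw no pod on the node referencing that object at the time. A minimal client-go sketch, assuming the admin kubeconfig path that appears later in this log (/var/lib/minikube/kubeconfig), that asks the apiserver the same authorization question via a SubjectAccessReview:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	authv1 "k8s.io/api/authorization/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Kubeconfig path taken from this log; an assumption for the sketch.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Ask: may this node's user list the coredns ConfigMap in kube-system?
    	sar := &authv1.SubjectAccessReview{
    		Spec: authv1.SubjectAccessReviewSpec{
    			User:   "system:node:running-upgrade-796000",
    			Groups: []string{"system:nodes"},
    			ResourceAttributes: &authv1.ResourceAttributes{
    				Namespace: "kube-system",
    				Verb:      "list",
    				Resource:  "configmaps",
    				Name:      "coredns",
    			},
    		},
    	}
    	res, err := cs.AuthorizationV1().SubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
    }

If the review comes back allowed=false with a node-authorizer reason, the denial is expected until a pod that mounts the ConfigMap is scheduled onto the node.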
	I0925 12:23:29.689534    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:23:34.691926    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:23:34.692005    4893 kubeadm.go:597] duration metric: took 4m7.961956834s to restartPrimaryControlPlane
	W0925 12:23:34.692089    4893 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0925 12:23:34.692128    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0925 12:23:35.666325    4893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 12:23:35.671399    4893 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0925 12:23:35.674203    4893 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0925 12:23:35.677158    4893 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0925 12:23:35.677164    4893 kubeadm.go:157] found existing configuration files:
	
	I0925 12:23:35.677189    4893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50275 /etc/kubernetes/admin.conf
	I0925 12:23:35.680229    4893 kubeadm.go:163] "https://control-plane.minikube.internal:50275" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50275 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0925 12:23:35.680261    4893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0925 12:23:35.682924    4893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50275 /etc/kubernetes/kubelet.conf
	I0925 12:23:35.685475    4893 kubeadm.go:163] "https://control-plane.minikube.internal:50275" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50275 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0925 12:23:35.685502    4893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0925 12:23:35.688598    4893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50275 /etc/kubernetes/controller-manager.conf
	I0925 12:23:35.691259    4893 kubeadm.go:163] "https://control-plane.minikube.internal:50275" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50275 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0925 12:23:35.691286    4893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0925 12:23:35.693811    4893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50275 /etc/kubernetes/scheduler.conf
	I0925 12:23:35.696760    4893 kubeadm.go:163] "https://control-plane.minikube.internal:50275" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50275 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0925 12:23:35.696785    4893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
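The grep/rm pairs above are minikube's stale-kubeconfig sweep (kubeadm.go:163): each file under /etc/kubernetes must mention the expected control-plane endpoint, and anything missing or mismatched is deleted so the following kubeadm init can regenerate it. A local sketch of the same logic, assuming direct file access rather than the SSH runner used in the log:

    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	// Endpoint taken from the grep commands above.
    	const endpoint = "https://control-plane.minikube.internal:50275"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Missing or pointing at a stale endpoint: remove it so
    			// `kubeadm init` writes a fresh kubeconfig.
    			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
    				log.Printf("remove %s: %v", f, rmErr)
    			}
    			continue
    		}
    		log.Printf("%s already targets %s, keeping", f, endpoint)
    	}
    }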
	I0925 12:23:35.699710    4893 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0925 12:23:35.716678    4893 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0925 12:23:35.716724    4893 kubeadm.go:310] [preflight] Running pre-flight checks
	I0925 12:23:35.762380    4893 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0925 12:23:35.762437    4893 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0925 12:23:35.762481    4893 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0925 12:23:35.813065    4893 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0925 12:23:35.817277    4893 out.go:235]   - Generating certificates and keys ...
	I0925 12:23:35.817309    4893 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0925 12:23:35.817341    4893 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0925 12:23:35.817388    4893 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0925 12:23:35.817423    4893 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0925 12:23:35.817477    4893 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0925 12:23:35.817507    4893 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0925 12:23:35.817553    4893 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0925 12:23:35.817588    4893 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0925 12:23:35.817630    4893 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0925 12:23:35.817671    4893 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0925 12:23:35.817694    4893 kubeadm.go:310] [certs] Using the existing "sa" key
	I0925 12:23:35.817724    4893 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0925 12:23:35.918260    4893 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0925 12:23:35.989165    4893 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0925 12:23:36.083759    4893 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0925 12:23:36.137035    4893 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0925 12:23:36.167863    4893 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0925 12:23:36.168269    4893 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0925 12:23:36.168323    4893 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0925 12:23:36.242981    4893 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0925 12:23:36.247479    4893 out.go:235]   - Booting up control plane ...
	I0925 12:23:36.247532    4893 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0925 12:23:36.247568    4893 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0925 12:23:36.247606    4893 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0925 12:23:36.247648    4893 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0925 12:23:36.247727    4893 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0925 12:23:40.247786    4893 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.001918 seconds
	I0925 12:23:40.247921    4893 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0925 12:23:40.251705    4893 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0925 12:23:40.767495    4893 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0925 12:23:40.767931    4893 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-796000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0925 12:23:41.272593    4893 kubeadm.go:310] [bootstrap-token] Using token: lqziip.t5ibeo2co01a4zx2
	I0925 12:23:41.277900    4893 out.go:235]   - Configuring RBAC rules ...
	I0925 12:23:41.277964    4893 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0925 12:23:41.278015    4893 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0925 12:23:41.284695    4893 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0925 12:23:41.285678    4893 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0925 12:23:41.286626    4893 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0925 12:23:41.287552    4893 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0925 12:23:41.291279    4893 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0925 12:23:41.468167    4893 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0925 12:23:41.677328    4893 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0925 12:23:41.677805    4893 kubeadm.go:310] 
	I0925 12:23:41.677838    4893 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0925 12:23:41.677841    4893 kubeadm.go:310] 
	I0925 12:23:41.677891    4893 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0925 12:23:41.677896    4893 kubeadm.go:310] 
	I0925 12:23:41.677909    4893 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0925 12:23:41.677938    4893 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0925 12:23:41.677962    4893 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0925 12:23:41.677964    4893 kubeadm.go:310] 
	I0925 12:23:41.677993    4893 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0925 12:23:41.677995    4893 kubeadm.go:310] 
	I0925 12:23:41.678017    4893 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0925 12:23:41.678019    4893 kubeadm.go:310] 
	I0925 12:23:41.678048    4893 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0925 12:23:41.678088    4893 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0925 12:23:41.678128    4893 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0925 12:23:41.678130    4893 kubeadm.go:310] 
	I0925 12:23:41.678175    4893 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0925 12:23:41.678214    4893 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0925 12:23:41.678217    4893 kubeadm.go:310] 
	I0925 12:23:41.678270    4893 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token lqziip.t5ibeo2co01a4zx2 \
	I0925 12:23:41.678325    4893 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e51346daa4df67057de8045209492e1d5416aabfe1ee2597d0ef678584899cc1 \
	I0925 12:23:41.678338    4893 kubeadm.go:310] 	--control-plane 
	I0925 12:23:41.678341    4893 kubeadm.go:310] 
	I0925 12:23:41.678400    4893 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0925 12:23:41.678405    4893 kubeadm.go:310] 
	I0925 12:23:41.678448    4893 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token lqziip.t5ibeo2co01a4zx2 \
	I0925 12:23:41.678516    4893 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e51346daa4df67057de8045209492e1d5416aabfe1ee2597d0ef678584899cc1 
	I0925 12:23:41.678575    4893 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
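The sha256:... value in the join commands above is kubeadm's CA public-key pin: a SHA-256 digest of the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate. A short sketch that recomputes it from the certificate directory named earlier in this log (/var/lib/minikube/certs):

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/hex"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    )

    func main() {
    	// certificateDir from the [certs] lines above.
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		log.Fatal("no PEM block in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Hash the raw SubjectPublicKeyInfo; this is the value that
    	// --discovery-token-ca-cert-hash pins during `kubeadm join`.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }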
	I0925 12:23:41.678632    4893 cni.go:84] Creating CNI manager for ""
	I0925 12:23:41.678641    4893 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 12:23:41.681719    4893 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0925 12:23:41.689831    4893 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0925 12:23:41.692956    4893 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
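The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration recommended on the previous line. Its exact contents are not shown in this log, so the conflist below is a generic bridge + host-local example, not necessarily minikube's file:

    package main

    import (
    	"log"
    	"os"
    )

    // Illustrative bridge CNI config; NOT the exact file minikube writes.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
    	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
    		log.Fatal(err)
    	}
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }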
	I0925 12:23:41.698968    4893 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0925 12:23:41.699051    4893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 12:23:41.699051    4893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-796000 minikube.k8s.io/updated_at=2024_09_25T12_23_41_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=cb9e6220ecbd737c1d09ad9630c6f144f437664a minikube.k8s.io/name=running-upgrade-796000 minikube.k8s.io/primary=true
	I0925 12:23:41.739446    4893 kubeadm.go:1113] duration metric: took 40.444083ms to wait for elevateKubeSystemPrivileges
	I0925 12:23:41.739460    4893 ops.go:34] apiserver oom_adj: -16
	I0925 12:23:41.739544    4893 kubeadm.go:394] duration metric: took 4m15.023288792s to StartCluster
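ops.go:34 above records the apiserver's OOM adjustment (-16, i.e. strongly shielded from the kernel OOM killer), read through /proc a few lines earlier. A small sketch of the same probe, assuming a Linux host and using a plain pgrep name match rather than the full-command-line match the log runs:

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"os"
    	"strings"
    )

    func main() {
    	// -x: exact process name, -n: newest matching PID.
    	pid, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	path := fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid)))
    	val, err := os.ReadFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("kube-apiserver oom_adj: %s", val)
    }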
	I0925 12:23:41.739556    4893 settings.go:142] acquiring lock: {Name:mk3a21ccfd977fa63a309ae265edad20537229ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:23:41.739651    4893 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:23:41.740033    4893 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/kubeconfig: {Name:mkc011f0309eba8a9546287478e16310d103c97e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:23:41.740256    4893 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:23:41.740261    4893 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0925 12:23:41.740297    4893 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-796000"
	I0925 12:23:41.740306    4893 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-796000"
	W0925 12:23:41.740334    4893 addons.go:243] addon storage-provisioner should already be in state true
	I0925 12:23:41.740336    4893 config.go:182] Loaded profile config "running-upgrade-796000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0925 12:23:41.740347    4893 host.go:66] Checking if "running-upgrade-796000" exists ...
	I0925 12:23:41.740322    4893 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-796000"
	I0925 12:23:41.740358    4893 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-796000"
	I0925 12:23:41.741209    4893 kapi.go:59] client config for running-upgrade-796000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/running-upgrade-796000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/running-upgrade-796000/client.key", CAFile:"/Users/jenkins/minikube-integration/19681-1412/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10619a030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0925 12:23:41.741330    4893 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-796000"
	W0925 12:23:41.741335    4893 addons.go:243] addon default-storageclass should already be in state true
	I0925 12:23:41.741342    4893 host.go:66] Checking if "running-upgrade-796000" exists ...
	I0925 12:23:41.744576    4893 out.go:177] * Verifying Kubernetes components...
	I0925 12:23:41.744874    4893 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0925 12:23:41.749247    4893 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0925 12:23:41.749253    4893 sshutil.go:53] new ssh client: &{IP:localhost Port:50243 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/running-upgrade-796000/id_rsa Username:docker}
	I0925 12:23:41.752778    4893 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 12:23:41.756745    4893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 12:23:41.760772    4893 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 12:23:41.760782    4893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0925 12:23:41.760789    4893 sshutil.go:53] new ssh client: &{IP:localhost Port:50243 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/running-upgrade-796000/id_rsa Username:docker}
	I0925 12:23:41.847259    4893 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0925 12:23:41.852089    4893 api_server.go:52] waiting for apiserver process to appear ...
	I0925 12:23:41.852140    4893 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 12:23:41.855987    4893 api_server.go:72] duration metric: took 115.722709ms to wait for apiserver process to appear ...
	I0925 12:23:41.855995    4893 api_server.go:88] waiting for apiserver healthz status ...
	I0925 12:23:41.856002    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:23:41.876964    4893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0925 12:23:41.901300    4893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 12:23:42.213346    4893 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0925 12:23:42.213358    4893 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0925 12:23:46.858018    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:23:46.858070    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:23:51.858437    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:23:51.858476    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:23:56.858843    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:23:56.858886    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:24:01.859640    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:24:01.859687    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:24:06.860375    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:24:06.860406    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:24:11.861444    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:24:11.861463    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0925 12:24:12.215094    4893 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0925 12:24:12.218423    4893 out.go:177] * Enabled addons: storage-provisioner
	I0925 12:24:12.226318    4893 addons.go:510] duration metric: took 30.486622416s for enable addons: enabled=[storage-provisioner]
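The default-storageclass failure above happens on the addon's first API call: listing StorageClasses against the unreachable apiserver. Roughly what that callback attempts, sketched with client-go (the kubeconfig path and the class name "standard" come from this log; the annotation is the standard Kubernetes default-class marker):

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/types"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    	defer cancel()
    	// This is the call that timed out in the log above.
    	scs, err := cs.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		log.Fatalf("Error listing StorageClasses: %v", err)
    	}
    	fmt.Printf("found %d storage classes\n", len(scs.Items))
    	// Mark "standard" as the cluster default via the well-known annotation.
    	patch := []byte(`{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}`)
    	if _, err := cs.StorageV1().StorageClasses().Patch(ctx, "standard",
    		types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
    		log.Fatalf("Error making standard the default storage class: %v", err)
    	}
    }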
	I0925 12:24:16.862685    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:24:16.862745    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:24:21.864435    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:24:21.864533    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:24:26.865978    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:24:26.866023    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:24:31.868196    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:24:31.868229    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:24:36.868437    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:24:36.868480    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:24:41.870596    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
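Each Checking/stopped pair in the run above is one HTTPS GET against /healthz with roughly a five-second budget; api_server.go:269 reports the client timeout because the VM's 10.0.2.x user-mode address never answers. A self-contained sketch of a single probe (it skips TLS verification purely to stay short; the real check trusts the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		// Mirrors the ~5s gap between "Checking" and "stopped" above.
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Sketch-only shortcut; do not skip verification in real code.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://10.0.2.15:8443/healthz")
    	if err != nil {
    		fmt.Println("stopped:", err) // e.g. context deadline exceeded
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
    }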
	I0925 12:24:41.870723    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:24:41.881468    4893 logs.go:276] 1 containers: [5cb9b6d95558]
	I0925 12:24:41.881566    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:24:41.891908    4893 logs.go:276] 1 containers: [9ea79e8d93b8]
	I0925 12:24:41.891996    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:24:41.902336    4893 logs.go:276] 2 containers: [2cf271d59fa5 578e7ca35890]
	I0925 12:24:41.902409    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:24:41.912871    4893 logs.go:276] 1 containers: [a7a133842232]
	I0925 12:24:41.912942    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:24:41.923193    4893 logs.go:276] 1 containers: [57558152c4b3]
	I0925 12:24:41.923283    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:24:41.933880    4893 logs.go:276] 1 containers: [a9a507e07152]
	I0925 12:24:41.933958    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:24:41.943769    4893 logs.go:276] 0 containers: []
	W0925 12:24:41.943779    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:24:41.943838    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:24:41.953980    4893 logs.go:276] 1 containers: [d3b3239f3636]
	I0925 12:24:41.953997    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:24:41.954003    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:24:41.958666    4893 logs.go:123] Gathering logs for etcd [9ea79e8d93b8] ...
	I0925 12:24:41.958673    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ea79e8d93b8"
	I0925 12:24:41.972927    4893 logs.go:123] Gathering logs for coredns [578e7ca35890] ...
	I0925 12:24:41.972937    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578e7ca35890"
	I0925 12:24:41.987712    4893 logs.go:123] Gathering logs for kube-scheduler [a7a133842232] ...
	I0925 12:24:41.987723    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a133842232"
	I0925 12:24:42.002200    4893 logs.go:123] Gathering logs for storage-provisioner [d3b3239f3636] ...
	I0925 12:24:42.002211    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b3239f3636"
	I0925 12:24:42.014166    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:24:42.014181    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:24:42.037606    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:24:42.037613    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:24:42.055670    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:24:42.055761    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:24:42.071800    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:24:42.071804    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:24:42.138045    4893 logs.go:123] Gathering logs for kube-apiserver [5cb9b6d95558] ...
	I0925 12:24:42.138056    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cb9b6d95558"
	I0925 12:24:42.151833    4893 logs.go:123] Gathering logs for coredns [2cf271d59fa5] ...
	I0925 12:24:42.151843    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf271d59fa5"
	I0925 12:24:42.163601    4893 logs.go:123] Gathering logs for kube-proxy [57558152c4b3] ...
	I0925 12:24:42.163612    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57558152c4b3"
	I0925 12:24:42.175798    4893 logs.go:123] Gathering logs for kube-controller-manager [a9a507e07152] ...
	I0925 12:24:42.175808    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a507e07152"
	I0925 12:24:42.197274    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:24:42.197282    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:24:42.208586    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:24:42.208598    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:24:42.208625    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:24:42.208632    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:24:42.208636    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:24:42.208640    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:24:42.208643    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:24:52.211833    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:24:57.214425    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:24:57.214745    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:24:57.240621    4893 logs.go:276] 1 containers: [5cb9b6d95558]
	I0925 12:24:57.240773    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:24:57.258080    4893 logs.go:276] 1 containers: [9ea79e8d93b8]
	I0925 12:24:57.258192    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:24:57.271619    4893 logs.go:276] 2 containers: [2cf271d59fa5 578e7ca35890]
	I0925 12:24:57.271715    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:24:57.283033    4893 logs.go:276] 1 containers: [a7a133842232]
	I0925 12:24:57.283109    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:24:57.292943    4893 logs.go:276] 1 containers: [57558152c4b3]
	I0925 12:24:57.293038    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:24:57.310600    4893 logs.go:276] 1 containers: [a9a507e07152]
	I0925 12:24:57.310686    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:24:57.320329    4893 logs.go:276] 0 containers: []
	W0925 12:24:57.320341    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:24:57.320415    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:24:57.330443    4893 logs.go:276] 1 containers: [d3b3239f3636]
	I0925 12:24:57.330458    4893 logs.go:123] Gathering logs for kube-controller-manager [a9a507e07152] ...
	I0925 12:24:57.330464    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a507e07152"
	I0925 12:24:57.347625    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:24:57.347634    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:24:57.387294    4893 logs.go:123] Gathering logs for kube-apiserver [5cb9b6d95558] ...
	I0925 12:24:57.387309    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cb9b6d95558"
	I0925 12:24:57.405383    4893 logs.go:123] Gathering logs for etcd [9ea79e8d93b8] ...
	I0925 12:24:57.405393    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ea79e8d93b8"
	I0925 12:24:57.424924    4893 logs.go:123] Gathering logs for coredns [2cf271d59fa5] ...
	I0925 12:24:57.424935    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf271d59fa5"
	I0925 12:24:57.439348    4893 logs.go:123] Gathering logs for coredns [578e7ca35890] ...
	I0925 12:24:57.439361    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578e7ca35890"
	I0925 12:24:57.451084    4893 logs.go:123] Gathering logs for kube-scheduler [a7a133842232] ...
	I0925 12:24:57.451094    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a133842232"
	I0925 12:24:57.470875    4893 logs.go:123] Gathering logs for kube-proxy [57558152c4b3] ...
	I0925 12:24:57.470885    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57558152c4b3"
	I0925 12:24:57.485039    4893 logs.go:123] Gathering logs for storage-provisioner [d3b3239f3636] ...
	I0925 12:24:57.485050    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b3239f3636"
	I0925 12:24:57.496291    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:24:57.496308    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:24:57.507852    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:24:57.507862    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:24:57.526730    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:24:57.526824    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:24:57.542739    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:24:57.542750    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:24:57.547709    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:24:57.547719    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:24:57.572129    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:24:57.572141    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:24:57.572167    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:24:57.572192    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:24:57.572197    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:24:57.572201    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:24:57.572205    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:25:07.576105    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:25:12.578261    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:25:12.578549    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:25:12.611117    4893 logs.go:276] 1 containers: [5cb9b6d95558]
	I0925 12:25:12.611244    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:25:12.627061    4893 logs.go:276] 1 containers: [9ea79e8d93b8]
	I0925 12:25:12.627152    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:25:12.640405    4893 logs.go:276] 2 containers: [2cf271d59fa5 578e7ca35890]
	I0925 12:25:12.640493    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:25:12.651561    4893 logs.go:276] 1 containers: [a7a133842232]
	I0925 12:25:12.651646    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:25:12.662774    4893 logs.go:276] 1 containers: [57558152c4b3]
	I0925 12:25:12.662873    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:25:12.673293    4893 logs.go:276] 1 containers: [a9a507e07152]
	I0925 12:25:12.673380    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:25:12.683425    4893 logs.go:276] 0 containers: []
	W0925 12:25:12.683438    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:25:12.683513    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:25:12.693776    4893 logs.go:276] 1 containers: [d3b3239f3636]
	I0925 12:25:12.693792    4893 logs.go:123] Gathering logs for coredns [578e7ca35890] ...
	I0925 12:25:12.693797    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578e7ca35890"
	I0925 12:25:12.707518    4893 logs.go:123] Gathering logs for kube-scheduler [a7a133842232] ...
	I0925 12:25:12.707531    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a133842232"
	I0925 12:25:12.722357    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:25:12.722366    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:25:12.734602    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:25:12.734618    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:25:12.771012    4893 logs.go:123] Gathering logs for kube-apiserver [5cb9b6d95558] ...
	I0925 12:25:12.771023    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cb9b6d95558"
	I0925 12:25:12.785024    4893 logs.go:123] Gathering logs for etcd [9ea79e8d93b8] ...
	I0925 12:25:12.785035    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ea79e8d93b8"
	I0925 12:25:12.798985    4893 logs.go:123] Gathering logs for kube-proxy [57558152c4b3] ...
	I0925 12:25:12.798996    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57558152c4b3"
	I0925 12:25:12.812044    4893 logs.go:123] Gathering logs for kube-controller-manager [a9a507e07152] ...
	I0925 12:25:12.812057    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a507e07152"
	I0925 12:25:12.829996    4893 logs.go:123] Gathering logs for storage-provisioner [d3b3239f3636] ...
	I0925 12:25:12.830008    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b3239f3636"
	I0925 12:25:12.841711    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:25:12.841722    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:25:12.865029    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:25:12.865036    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:25:12.883248    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:25:12.883340    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:25:12.899264    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:25:12.899272    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:25:12.903903    4893 logs.go:123] Gathering logs for coredns [2cf271d59fa5] ...
	I0925 12:25:12.903913    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf271d59fa5"
	I0925 12:25:12.918995    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:25:12.919004    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:25:12.919030    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:25:12.919036    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:25:12.919040    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:25:12.919043    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:25:12.919046    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:25:22.921592    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:25:27.923816    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:25:27.924333    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:25:27.957289    4893 logs.go:276] 1 containers: [5cb9b6d95558]
	I0925 12:25:27.957457    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:25:27.977297    4893 logs.go:276] 1 containers: [9ea79e8d93b8]
	I0925 12:25:27.977415    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:25:27.991671    4893 logs.go:276] 2 containers: [2cf271d59fa5 578e7ca35890]
	I0925 12:25:27.991753    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:25:28.003969    4893 logs.go:276] 1 containers: [a7a133842232]
	I0925 12:25:28.004043    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:25:28.015210    4893 logs.go:276] 1 containers: [57558152c4b3]
	I0925 12:25:28.015304    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:25:28.026211    4893 logs.go:276] 1 containers: [a9a507e07152]
	I0925 12:25:28.026297    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:25:28.037031    4893 logs.go:276] 0 containers: []
	W0925 12:25:28.037044    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:25:28.037112    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:25:28.048156    4893 logs.go:276] 1 containers: [d3b3239f3636]
	I0925 12:25:28.048172    4893 logs.go:123] Gathering logs for storage-provisioner [d3b3239f3636] ...
	I0925 12:25:28.048178    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b3239f3636"
	I0925 12:25:28.060485    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:25:28.060498    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:25:28.080780    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:25:28.080871    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:25:28.096456    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:25:28.096464    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:25:28.101181    4893 logs.go:123] Gathering logs for kube-apiserver [5cb9b6d95558] ...
	I0925 12:25:28.101188    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cb9b6d95558"
	I0925 12:25:28.115851    4893 logs.go:123] Gathering logs for etcd [9ea79e8d93b8] ...
	I0925 12:25:28.115862    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ea79e8d93b8"
	I0925 12:25:28.129819    4893 logs.go:123] Gathering logs for coredns [2cf271d59fa5] ...
	I0925 12:25:28.129831    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf271d59fa5"
	I0925 12:25:28.148160    4893 logs.go:123] Gathering logs for kube-scheduler [a7a133842232] ...
	I0925 12:25:28.148173    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a133842232"
	I0925 12:25:28.163865    4893 logs.go:123] Gathering logs for kube-controller-manager [a9a507e07152] ...
	I0925 12:25:28.163881    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a507e07152"
	I0925 12:25:28.182008    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:25:28.182021    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:25:28.206445    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:25:28.206452    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:25:28.241312    4893 logs.go:123] Gathering logs for coredns [578e7ca35890] ...
	I0925 12:25:28.241323    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578e7ca35890"
	I0925 12:25:28.253518    4893 logs.go:123] Gathering logs for kube-proxy [57558152c4b3] ...
	I0925 12:25:28.253534    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57558152c4b3"
	I0925 12:25:28.266392    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:25:28.266408    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:25:28.278179    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:25:28.278190    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:25:28.278218    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:25:28.278223    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:25:28.278259    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:25:28.278275    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:25:28.278286    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
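The block above is one pass of the diagnostic loop minikube runs while waiting for the upgraded control plane: the healthz probe against https://10.0.2.15:8443/healthz times out, so the test enumerates the control-plane containers and re-gathers their logs before probing again. A minimal sketch of reproducing the probe by hand, assuming shell access to the guest VM (the address and the 5-second budget come from the log; the curl flags are standard):

    # -k skips TLS verification, since the apiserver serves a cluster-internal
    # certificate; --max-time mirrors the 5-second deadline seen in the log.
    curl -k --max-time 5 https://10.0.2.15:8443/healthz
    # A healthy apiserver answers "ok"; here the request would hang and time
    # out, matching the "context deadline exceeded" entries above.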
	I0925 12:25:38.282291    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:25:43.284679    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:25:43.284809    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:25:43.296005    4893 logs.go:276] 1 containers: [5cb9b6d95558]
	I0925 12:25:43.296100    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:25:43.306424    4893 logs.go:276] 1 containers: [9ea79e8d93b8]
	I0925 12:25:43.306508    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:25:43.316919    4893 logs.go:276] 2 containers: [2cf271d59fa5 578e7ca35890]
	I0925 12:25:43.317006    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:25:43.327623    4893 logs.go:276] 1 containers: [a7a133842232]
	I0925 12:25:43.327698    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:25:43.338108    4893 logs.go:276] 1 containers: [57558152c4b3]
	I0925 12:25:43.338191    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:25:43.348145    4893 logs.go:276] 1 containers: [a9a507e07152]
	I0925 12:25:43.348218    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:25:43.359191    4893 logs.go:276] 0 containers: []
	W0925 12:25:43.359201    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:25:43.359279    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:25:43.369581    4893 logs.go:276] 1 containers: [d3b3239f3636]
	I0925 12:25:43.369597    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:25:43.369604    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:25:43.374217    4893 logs.go:123] Gathering logs for kube-apiserver [5cb9b6d95558] ...
	I0925 12:25:43.374225    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cb9b6d95558"
	I0925 12:25:43.388217    4893 logs.go:123] Gathering logs for kube-scheduler [a7a133842232] ...
	I0925 12:25:43.388226    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a133842232"
	I0925 12:25:43.404784    4893 logs.go:123] Gathering logs for kube-proxy [57558152c4b3] ...
	I0925 12:25:43.404794    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57558152c4b3"
	I0925 12:25:43.416495    4893 logs.go:123] Gathering logs for kube-controller-manager [a9a507e07152] ...
	I0925 12:25:43.416506    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a507e07152"
	I0925 12:25:43.434204    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:25:43.434214    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:25:43.458648    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:25:43.458659    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:25:43.469754    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:25:43.469767    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:25:43.489019    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:25:43.489110    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:25:43.504966    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:25:43.504973    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:25:43.540700    4893 logs.go:123] Gathering logs for etcd [9ea79e8d93b8] ...
	I0925 12:25:43.540714    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ea79e8d93b8"
	I0925 12:25:43.555385    4893 logs.go:123] Gathering logs for coredns [2cf271d59fa5] ...
	I0925 12:25:43.555399    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf271d59fa5"
	I0925 12:25:43.567422    4893 logs.go:123] Gathering logs for coredns [578e7ca35890] ...
	I0925 12:25:43.567437    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578e7ca35890"
	I0925 12:25:43.579278    4893 logs.go:123] Gathering logs for storage-provisioner [d3b3239f3636] ...
	I0925 12:25:43.579294    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b3239f3636"
	I0925 12:25:43.590761    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:25:43.590771    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:25:43.590797    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:25:43.590805    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:25:43.590809    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:25:43.590815    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:25:43.590818    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
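Each pass enumerates components the same way: kubeadm-style Docker containers are named k8s_<component>_<pod>_..., so one name filter per component yields the IDs reported on the "N containers:" lines. A sketch that collapses the eight docker ps calls into a loop, assuming a shell inside the VM:

    # Hypothetical consolidation of the per-component queries above.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      printf '%s: ' "$c"
      docker ps -a --filter "name=k8s_$c" --format '{{.ID}}' | tr '\n' ' '
      echo
    done

The coredns count growing from 2 to 4 containers between passes is consistent with docker ps -a listing exited containers alongside their replacements.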
	I0925 12:25:53.594782    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:25:58.597156    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:25:58.597501    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:25:58.628879    4893 logs.go:276] 1 containers: [5cb9b6d95558]
	I0925 12:25:58.629034    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:25:58.648343    4893 logs.go:276] 1 containers: [9ea79e8d93b8]
	I0925 12:25:58.648452    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:25:58.668877    4893 logs.go:276] 4 containers: [7e4c37f8e257 5a5096007204 2cf271d59fa5 578e7ca35890]
	I0925 12:25:58.668975    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:25:58.679872    4893 logs.go:276] 1 containers: [a7a133842232]
	I0925 12:25:58.679964    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:25:58.690470    4893 logs.go:276] 1 containers: [57558152c4b3]
	I0925 12:25:58.690556    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:25:58.701248    4893 logs.go:276] 1 containers: [a9a507e07152]
	I0925 12:25:58.701326    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:25:58.716724    4893 logs.go:276] 0 containers: []
	W0925 12:25:58.716736    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:25:58.716807    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:25:58.728072    4893 logs.go:276] 1 containers: [d3b3239f3636]
	I0925 12:25:58.728088    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:25:58.728093    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:25:58.748006    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:25:58.748098    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:25:58.763762    4893 logs.go:123] Gathering logs for kube-scheduler [a7a133842232] ...
	I0925 12:25:58.763771    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a133842232"
	I0925 12:25:58.779236    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:25:58.779247    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:25:58.791054    4893 logs.go:123] Gathering logs for coredns [2cf271d59fa5] ...
	I0925 12:25:58.791063    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf271d59fa5"
	I0925 12:25:58.803999    4893 logs.go:123] Gathering logs for coredns [578e7ca35890] ...
	I0925 12:25:58.804015    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578e7ca35890"
	I0925 12:25:58.815716    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:25:58.815726    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:25:58.842200    4893 logs.go:123] Gathering logs for etcd [9ea79e8d93b8] ...
	I0925 12:25:58.842208    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ea79e8d93b8"
	I0925 12:25:58.856418    4893 logs.go:123] Gathering logs for coredns [5a5096007204] ...
	I0925 12:25:58.856430    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a5096007204"
	I0925 12:25:58.872953    4893 logs.go:123] Gathering logs for kube-controller-manager [a9a507e07152] ...
	I0925 12:25:58.872966    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a507e07152"
	I0925 12:25:58.890858    4893 logs.go:123] Gathering logs for coredns [7e4c37f8e257] ...
	I0925 12:25:58.890869    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4c37f8e257"
	I0925 12:25:58.903393    4893 logs.go:123] Gathering logs for kube-proxy [57558152c4b3] ...
	I0925 12:25:58.903406    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57558152c4b3"
	I0925 12:25:58.918961    4893 logs.go:123] Gathering logs for storage-provisioner [d3b3239f3636] ...
	I0925 12:25:58.918972    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b3239f3636"
	I0925 12:25:58.934350    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:25:58.934360    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:25:58.938628    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:25:58.938636    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:25:58.979242    4893 logs.go:123] Gathering logs for kube-apiserver [5cb9b6d95558] ...
	I0925 12:25:58.979257    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cb9b6d95558"
	I0925 12:25:58.993987    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:25:58.993996    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:25:58.994022    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:25:58.994027    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:25:58.994030    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:25:58.994034    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:25:58.994037    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:26:08.997980    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:26:13.998378    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:26:13.998652    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:26:14.021562    4893 logs.go:276] 1 containers: [5cb9b6d95558]
	I0925 12:26:14.021690    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:26:14.036876    4893 logs.go:276] 1 containers: [9ea79e8d93b8]
	I0925 12:26:14.036977    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:26:14.049433    4893 logs.go:276] 4 containers: [7e4c37f8e257 5a5096007204 2cf271d59fa5 578e7ca35890]
	I0925 12:26:14.049517    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:26:14.060568    4893 logs.go:276] 1 containers: [a7a133842232]
	I0925 12:26:14.060654    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:26:14.071304    4893 logs.go:276] 1 containers: [57558152c4b3]
	I0925 12:26:14.071386    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:26:14.081568    4893 logs.go:276] 1 containers: [a9a507e07152]
	I0925 12:26:14.081654    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:26:14.093562    4893 logs.go:276] 0 containers: []
	W0925 12:26:14.093574    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:26:14.093644    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:26:14.104138    4893 logs.go:276] 1 containers: [d3b3239f3636]
	I0925 12:26:14.104156    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:26:14.104162    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:26:14.141128    4893 logs.go:123] Gathering logs for etcd [9ea79e8d93b8] ...
	I0925 12:26:14.141139    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ea79e8d93b8"
	I0925 12:26:14.157183    4893 logs.go:123] Gathering logs for coredns [7e4c37f8e257] ...
	I0925 12:26:14.157193    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4c37f8e257"
	I0925 12:26:14.169572    4893 logs.go:123] Gathering logs for coredns [2cf271d59fa5] ...
	I0925 12:26:14.169586    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf271d59fa5"
	I0925 12:26:14.181528    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:26:14.181538    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:26:14.206283    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:26:14.206295    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:26:14.218445    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:26:14.218456    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:26:14.239145    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:26:14.239237    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:26:14.255055    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:26:14.255064    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:26:14.261353    4893 logs.go:123] Gathering logs for kube-scheduler [a7a133842232] ...
	I0925 12:26:14.261362    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a133842232"
	I0925 12:26:14.276901    4893 logs.go:123] Gathering logs for kube-controller-manager [a9a507e07152] ...
	I0925 12:26:14.276911    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a507e07152"
	I0925 12:26:14.295073    4893 logs.go:123] Gathering logs for storage-provisioner [d3b3239f3636] ...
	I0925 12:26:14.295082    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b3239f3636"
	I0925 12:26:14.306487    4893 logs.go:123] Gathering logs for kube-apiserver [5cb9b6d95558] ...
	I0925 12:26:14.306496    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cb9b6d95558"
	I0925 12:26:14.324436    4893 logs.go:123] Gathering logs for coredns [5a5096007204] ...
	I0925 12:26:14.324446    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a5096007204"
	I0925 12:26:14.342340    4893 logs.go:123] Gathering logs for coredns [578e7ca35890] ...
	I0925 12:26:14.342350    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578e7ca35890"
	I0925 12:26:14.353508    4893 logs.go:123] Gathering logs for kube-proxy [57558152c4b3] ...
	I0925 12:26:14.353523    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57558152c4b3"
	I0925 12:26:14.364875    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:26:14.364885    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:26:14.364912    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:26:14.364916    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:26:14.364919    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:26:14.364926    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:26:14.364929    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
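The only kubelet problem the scanner keeps flagging is the reflector pair from 19:19:44: the node authorizer denies the kubelet's list/watch of the kube-system/coredns ConfigMap because, at that instant, it sees no pod on running-upgrade-796000 that references the ConfigMap, which is what "no relationship found between node ... and this object" means. During an upgrade this is typically a transient ordering effect rather than a broken role binding. A sketch of re-checking the decision after the fact, assuming a kubeconfig with impersonation rights (the node identity is copied from the log):

    # Ask the authorizers whether the node's kubelet may list the ConfigMaps.
    kubectl auth can-i list configmaps \
      --as=system:node:running-upgrade-796000 \
      --as-group=system:nodes \
      -n kube-system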
	I0925 12:26:24.368903    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:26:29.371574    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:26:29.371788    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:26:29.398097    4893 logs.go:276] 1 containers: [5cb9b6d95558]
	I0925 12:26:29.398222    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:26:29.412796    4893 logs.go:276] 1 containers: [9ea79e8d93b8]
	I0925 12:26:29.412884    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:26:29.425327    4893 logs.go:276] 4 containers: [7e4c37f8e257 5a5096007204 2cf271d59fa5 578e7ca35890]
	I0925 12:26:29.425416    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:26:29.437611    4893 logs.go:276] 1 containers: [a7a133842232]
	I0925 12:26:29.437695    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:26:29.448324    4893 logs.go:276] 1 containers: [57558152c4b3]
	I0925 12:26:29.448403    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:26:29.459892    4893 logs.go:276] 1 containers: [a9a507e07152]
	I0925 12:26:29.459977    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:26:29.477324    4893 logs.go:276] 0 containers: []
	W0925 12:26:29.477338    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:26:29.477405    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:26:29.487935    4893 logs.go:276] 1 containers: [d3b3239f3636]
	I0925 12:26:29.487952    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:26:29.487958    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:26:29.512258    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:26:29.512268    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:26:29.524055    4893 logs.go:123] Gathering logs for coredns [578e7ca35890] ...
	I0925 12:26:29.524066    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578e7ca35890"
	I0925 12:26:29.539787    4893 logs.go:123] Gathering logs for kube-scheduler [a7a133842232] ...
	I0925 12:26:29.539797    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a133842232"
	I0925 12:26:29.558851    4893 logs.go:123] Gathering logs for storage-provisioner [d3b3239f3636] ...
	I0925 12:26:29.558863    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b3239f3636"
	I0925 12:26:29.570592    4893 logs.go:123] Gathering logs for coredns [7e4c37f8e257] ...
	I0925 12:26:29.570603    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4c37f8e257"
	I0925 12:26:29.582528    4893 logs.go:123] Gathering logs for coredns [5a5096007204] ...
	I0925 12:26:29.582540    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a5096007204"
	I0925 12:26:29.595156    4893 logs.go:123] Gathering logs for kube-controller-manager [a9a507e07152] ...
	I0925 12:26:29.595166    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a507e07152"
	I0925 12:26:29.613385    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:26:29.613403    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:26:29.633666    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:26:29.633761    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:26:29.650714    4893 logs.go:123] Gathering logs for kube-apiserver [5cb9b6d95558] ...
	I0925 12:26:29.650723    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cb9b6d95558"
	I0925 12:26:29.669359    4893 logs.go:123] Gathering logs for etcd [9ea79e8d93b8] ...
	I0925 12:26:29.669376    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ea79e8d93b8"
	I0925 12:26:29.682572    4893 logs.go:123] Gathering logs for kube-proxy [57558152c4b3] ...
	I0925 12:26:29.682583    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57558152c4b3"
	I0925 12:26:29.694180    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:26:29.694190    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:26:29.698439    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:26:29.698445    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:26:29.734382    4893 logs.go:123] Gathering logs for coredns [2cf271d59fa5] ...
	I0925 12:26:29.734393    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf271d59fa5"
	I0925 12:26:29.746489    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:26:29.746500    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:26:29.746531    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:26:29.746535    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:26:29.746538    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:26:29.746543    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:26:29.746545    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
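The kubelet gather always pulls the most recent 400 journal lines, which is why the same 19:19:44 entries are re-reported on every pass. A sketch of narrowing the journal to the window around the flagged entries instead, assuming the date of this run and a shell inside the guest:

    # Only warnings and worse from the ten seconds around the problem.
    sudo journalctl -u kubelet -p warning \
         --since '2024-09-25 19:19:40' --until '2024-09-25 19:19:50'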
	I0925 12:26:39.750568    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:26:44.752799    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:26:44.752947    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:26:44.766528    4893 logs.go:276] 1 containers: [5cb9b6d95558]
	I0925 12:26:44.766648    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:26:44.778276    4893 logs.go:276] 1 containers: [9ea79e8d93b8]
	I0925 12:26:44.778371    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:26:44.788861    4893 logs.go:276] 4 containers: [7e4c37f8e257 5a5096007204 2cf271d59fa5 578e7ca35890]
	I0925 12:26:44.788953    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:26:44.806752    4893 logs.go:276] 1 containers: [a7a133842232]
	I0925 12:26:44.806852    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:26:44.817510    4893 logs.go:276] 1 containers: [57558152c4b3]
	I0925 12:26:44.817591    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:26:44.827549    4893 logs.go:276] 1 containers: [a9a507e07152]
	I0925 12:26:44.827635    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:26:44.837905    4893 logs.go:276] 0 containers: []
	W0925 12:26:44.837915    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:26:44.837979    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:26:44.848007    4893 logs.go:276] 1 containers: [d3b3239f3636]
	I0925 12:26:44.848025    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:26:44.848030    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:26:44.852651    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:26:44.852657    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:26:44.891357    4893 logs.go:123] Gathering logs for kube-apiserver [5cb9b6d95558] ...
	I0925 12:26:44.891371    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cb9b6d95558"
	I0925 12:26:44.906343    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:26:44.906354    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:26:44.918940    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:26:44.918950    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:26:44.937060    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:26:44.937151    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:26:44.953069    4893 logs.go:123] Gathering logs for etcd [9ea79e8d93b8] ...
	I0925 12:26:44.953077    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ea79e8d93b8"
	I0925 12:26:44.966567    4893 logs.go:123] Gathering logs for kube-controller-manager [a9a507e07152] ...
	I0925 12:26:44.966577    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a507e07152"
	I0925 12:26:44.984505    4893 logs.go:123] Gathering logs for storage-provisioner [d3b3239f3636] ...
	I0925 12:26:44.984516    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b3239f3636"
	I0925 12:26:44.995887    4893 logs.go:123] Gathering logs for coredns [5a5096007204] ...
	I0925 12:26:44.995898    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a5096007204"
	I0925 12:26:45.007874    4893 logs.go:123] Gathering logs for kube-proxy [57558152c4b3] ...
	I0925 12:26:45.007882    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57558152c4b3"
	I0925 12:26:45.019963    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:26:45.019973    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:26:45.045081    4893 logs.go:123] Gathering logs for coredns [7e4c37f8e257] ...
	I0925 12:26:45.045089    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4c37f8e257"
	I0925 12:26:45.056632    4893 logs.go:123] Gathering logs for coredns [2cf271d59fa5] ...
	I0925 12:26:45.056642    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf271d59fa5"
	I0925 12:26:45.067692    4893 logs.go:123] Gathering logs for coredns [578e7ca35890] ...
	I0925 12:26:45.067702    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578e7ca35890"
	I0925 12:26:45.082250    4893 logs.go:123] Gathering logs for kube-scheduler [a7a133842232] ...
	I0925 12:26:45.082258    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a133842232"
	I0925 12:26:45.097727    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:26:45.097736    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:26:45.097763    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:26:45.097767    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:26:45.097770    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:26:45.097775    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:26:45.097778    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:26:55.100422    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:27:00.102548    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:27:00.102661    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:27:00.113882    4893 logs.go:276] 1 containers: [5cb9b6d95558]
	I0925 12:27:00.113967    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:27:00.124690    4893 logs.go:276] 1 containers: [9ea79e8d93b8]
	I0925 12:27:00.124767    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:27:00.137604    4893 logs.go:276] 4 containers: [7e4c37f8e257 5a5096007204 2cf271d59fa5 578e7ca35890]
	I0925 12:27:00.137692    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:27:00.147944    4893 logs.go:276] 1 containers: [a7a133842232]
	I0925 12:27:00.148032    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:27:00.158661    4893 logs.go:276] 1 containers: [57558152c4b3]
	I0925 12:27:00.158740    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:27:00.168811    4893 logs.go:276] 1 containers: [a9a507e07152]
	I0925 12:27:00.168903    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:27:00.181607    4893 logs.go:276] 0 containers: []
	W0925 12:27:00.181619    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:27:00.181692    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:27:00.192375    4893 logs.go:276] 1 containers: [d3b3239f3636]
	I0925 12:27:00.192394    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:27:00.192400    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:27:00.227872    4893 logs.go:123] Gathering logs for coredns [5a5096007204] ...
	I0925 12:27:00.227884    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a5096007204"
	I0925 12:27:00.253405    4893 logs.go:123] Gathering logs for kube-scheduler [a7a133842232] ...
	I0925 12:27:00.253416    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a133842232"
	I0925 12:27:00.276366    4893 logs.go:123] Gathering logs for storage-provisioner [d3b3239f3636] ...
	I0925 12:27:00.276382    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b3239f3636"
	I0925 12:27:00.289681    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:27:00.289696    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:27:00.313927    4893 logs.go:123] Gathering logs for kube-apiserver [5cb9b6d95558] ...
	I0925 12:27:00.313940    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cb9b6d95558"
	I0925 12:27:00.328974    4893 logs.go:123] Gathering logs for coredns [578e7ca35890] ...
	I0925 12:27:00.328987    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578e7ca35890"
	I0925 12:27:00.340507    4893 logs.go:123] Gathering logs for kube-controller-manager [a9a507e07152] ...
	I0925 12:27:00.340519    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a507e07152"
	I0925 12:27:00.358331    4893 logs.go:123] Gathering logs for coredns [7e4c37f8e257] ...
	I0925 12:27:00.358340    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4c37f8e257"
	I0925 12:27:00.369794    4893 logs.go:123] Gathering logs for kube-proxy [57558152c4b3] ...
	I0925 12:27:00.369810    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57558152c4b3"
	I0925 12:27:00.381955    4893 logs.go:123] Gathering logs for etcd [9ea79e8d93b8] ...
	I0925 12:27:00.381964    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ea79e8d93b8"
	I0925 12:27:00.402196    4893 logs.go:123] Gathering logs for coredns [2cf271d59fa5] ...
	I0925 12:27:00.402215    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf271d59fa5"
	I0925 12:27:00.414948    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:27:00.414964    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:27:00.426698    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:27:00.426708    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:27:00.446430    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:27:00.446522    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:27:00.462172    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:27:00.462179    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:27:00.466969    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:27:00.466976    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:27:00.466998    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:27:00.467004    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:27:00.467007    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:27:00.467012    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:27:00.467014    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:27:10.470930    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:27:15.473107    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:27:15.473287    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:27:15.490548    4893 logs.go:276] 1 containers: [5cb9b6d95558]
	I0925 12:27:15.490651    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:27:15.504984    4893 logs.go:276] 1 containers: [9ea79e8d93b8]
	I0925 12:27:15.505069    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:27:15.516037    4893 logs.go:276] 4 containers: [7e4c37f8e257 5a5096007204 2cf271d59fa5 578e7ca35890]
	I0925 12:27:15.516129    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:27:15.533688    4893 logs.go:276] 1 containers: [a7a133842232]
	I0925 12:27:15.533777    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:27:15.546279    4893 logs.go:276] 1 containers: [57558152c4b3]
	I0925 12:27:15.546356    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:27:15.556798    4893 logs.go:276] 1 containers: [a9a507e07152]
	I0925 12:27:15.556873    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:27:15.566883    4893 logs.go:276] 0 containers: []
	W0925 12:27:15.566902    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:27:15.566966    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:27:15.577977    4893 logs.go:276] 1 containers: [d3b3239f3636]
	I0925 12:27:15.578000    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:27:15.578005    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:27:15.590092    4893 logs.go:123] Gathering logs for kube-apiserver [5cb9b6d95558] ...
	I0925 12:27:15.590104    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cb9b6d95558"
	I0925 12:27:15.605968    4893 logs.go:123] Gathering logs for coredns [5a5096007204] ...
	I0925 12:27:15.605978    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a5096007204"
	I0925 12:27:15.617930    4893 logs.go:123] Gathering logs for coredns [2cf271d59fa5] ...
	I0925 12:27:15.617941    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf271d59fa5"
	I0925 12:27:15.630607    4893 logs.go:123] Gathering logs for kube-scheduler [a7a133842232] ...
	I0925 12:27:15.630616    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a133842232"
	I0925 12:27:15.645905    4893 logs.go:123] Gathering logs for kube-controller-manager [a9a507e07152] ...
	I0925 12:27:15.645915    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a507e07152"
	I0925 12:27:15.663755    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:27:15.663764    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:27:15.702299    4893 logs.go:123] Gathering logs for coredns [7e4c37f8e257] ...
	I0925 12:27:15.702309    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4c37f8e257"
	I0925 12:27:15.714397    4893 logs.go:123] Gathering logs for storage-provisioner [d3b3239f3636] ...
	I0925 12:27:15.714407    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b3239f3636"
	I0925 12:27:15.726265    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:27:15.726278    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:27:15.746604    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:27:15.746701    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:27:15.762661    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:27:15.762670    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:27:15.767276    4893 logs.go:123] Gathering logs for etcd [9ea79e8d93b8] ...
	I0925 12:27:15.767287    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ea79e8d93b8"
	I0925 12:27:15.781567    4893 logs.go:123] Gathering logs for coredns [578e7ca35890] ...
	I0925 12:27:15.781577    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578e7ca35890"
	I0925 12:27:15.794117    4893 logs.go:123] Gathering logs for kube-proxy [57558152c4b3] ...
	I0925 12:27:15.794128    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57558152c4b3"
	I0925 12:27:15.806625    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:27:15.806638    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:27:15.830301    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:27:15.830311    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:27:15.830336    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:27:15.830340    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:27:15.830345    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:27:15.830348    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:27:15.830351    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
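The timestamps pin down the retry cadence: each probe is allowed 5 seconds (12:27:10.470 to 12:27:15.473 in the pass above) and the next probe starts about 10 seconds after a gather pass completes. An equivalent wait loop, sketched in POSIX shell:

    # -s silences progress output, -f turns HTTP errors into failures,
    # -k skips TLS verification; retry every 10s until healthz answers.
    while ! curl -ksf --max-time 5 https://10.0.2.15:8443/healthz >/dev/null; do
      sleep 10
    done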
	I0925 12:27:25.834305    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:27:30.836450    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:27:30.836591    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:27:30.857426    4893 logs.go:276] 1 containers: [5cb9b6d95558]
	I0925 12:27:30.857511    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:27:30.874698    4893 logs.go:276] 1 containers: [9ea79e8d93b8]
	I0925 12:27:30.874784    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:27:30.885522    4893 logs.go:276] 4 containers: [7e4c37f8e257 5a5096007204 2cf271d59fa5 578e7ca35890]
	I0925 12:27:30.885605    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:27:30.900393    4893 logs.go:276] 1 containers: [a7a133842232]
	I0925 12:27:30.900480    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:27:30.911851    4893 logs.go:276] 1 containers: [57558152c4b3]
	I0925 12:27:30.911935    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:27:30.923120    4893 logs.go:276] 1 containers: [a9a507e07152]
	I0925 12:27:30.923206    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:27:30.933484    4893 logs.go:276] 0 containers: []
	W0925 12:27:30.933497    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:27:30.933568    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:27:30.944452    4893 logs.go:276] 1 containers: [d3b3239f3636]
	I0925 12:27:30.944468    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:27:30.944473    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:27:30.962441    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:27:30.962534    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:27:30.978183    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:27:30.978191    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:27:30.982917    4893 logs.go:123] Gathering logs for coredns [5a5096007204] ...
	I0925 12:27:30.982925    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a5096007204"
	I0925 12:27:30.994696    4893 logs.go:123] Gathering logs for kube-scheduler [a7a133842232] ...
	I0925 12:27:30.994707    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a133842232"
	I0925 12:27:31.010488    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:27:31.010499    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:27:31.046251    4893 logs.go:123] Gathering logs for kube-apiserver [5cb9b6d95558] ...
	I0925 12:27:31.046266    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cb9b6d95558"
	I0925 12:27:31.060597    4893 logs.go:123] Gathering logs for etcd [9ea79e8d93b8] ...
	I0925 12:27:31.060609    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ea79e8d93b8"
	I0925 12:27:31.078874    4893 logs.go:123] Gathering logs for coredns [2cf271d59fa5] ...
	I0925 12:27:31.078883    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf271d59fa5"
	I0925 12:27:31.091612    4893 logs.go:123] Gathering logs for kube-proxy [57558152c4b3] ...
	I0925 12:27:31.091624    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57558152c4b3"
	I0925 12:27:31.103032    4893 logs.go:123] Gathering logs for coredns [7e4c37f8e257] ...
	I0925 12:27:31.103043    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4c37f8e257"
	I0925 12:27:31.114612    4893 logs.go:123] Gathering logs for storage-provisioner [d3b3239f3636] ...
	I0925 12:27:31.114621    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b3239f3636"
	I0925 12:27:31.126012    4893 logs.go:123] Gathering logs for coredns [578e7ca35890] ...
	I0925 12:27:31.126022    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578e7ca35890"
	I0925 12:27:31.137632    4893 logs.go:123] Gathering logs for kube-controller-manager [a9a507e07152] ...
	I0925 12:27:31.137646    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a507e07152"
	I0925 12:27:31.155126    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:27:31.155136    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:27:31.180043    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:27:31.180052    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:27:31.191490    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:27:31.191499    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:27:31.191527    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:27:31.191532    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:27:31.191535    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:27:31.191539    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:27:31.191543    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:27:41.195525    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:27:46.197102    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:27:46.201470    4893 out.go:201] 
	W0925 12:27:46.205391    4893 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0925 12:27:46.205398    4893 out.go:270] * 
	W0925 12:27:46.205835    4893 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 12:27:46.217397    4893 out.go:201] 

** /stderr **
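The stderr above ends with the upgraded binary (v1.34.0) polling the apiserver healthz endpoint and never seeing it healthy within the 6m0s node-start budget, hence the GUEST_START exit. As a rough sketch only (not part of the test suite), the probe loop the log records amounts to the following, run from inside the guest (for example via minikube ssh), since 10.0.2.15 is the QEMU user-network guest address and is not reachable from the host by that IP:

	# hand-rolled approximation of the healthz poll seen in the log; values taken from this run
	deadline=$((SECONDS + 360))            # 6m0s, matching StartHostTimeout in the cluster config
	while [ "$SECONDS" -lt "$deadline" ]; do
	  # -k because the apiserver serves minikube's own CA; --max-time mirrors the ~5s client timeout
	  if curl -ksS --max-time 5 https://10.0.2.15:8443/healthz | grep -q ok; then
	    echo "apiserver healthy"; break
	  fi
	  sleep 10                             # the log shows roughly 10s between probe rounds
	done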
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-796000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-09-25 12:27:46.291423 -0700 PDT m=+3542.392255251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-796000 -n running-upgrade-796000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-796000 -n running-upgrade-796000: exit status 2 (15.722311666s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
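The only kubelet problem minikube surfaced during the run is the node-authorizer denial repeated throughout the stderr ("no relationship found between node 'running-upgrade-796000' and this object"): the kubelet may only read the coredns ConfigMap once a pod that mounts it is bound to that node. Illustrative follow-up commands (not executed by the harness, and only useful once the apiserver answers) would be:

	# which node are the coredns pods bound to? (what the node authorizer keys its decision on)
	out/minikube-darwin-arm64 -p running-upgrade-796000 kubectl -- -n kube-system get pods -o wide
	out/minikube-darwin-arm64 -p running-upgrade-796000 kubectl -- get nodes -o wide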
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-796000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-093000          | force-systemd-flag-093000 | jenkins | v1.34.0 | 25 Sep 24 12:17 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-884000              | force-systemd-env-884000  | jenkins | v1.34.0 | 25 Sep 24 12:17 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-884000           | force-systemd-env-884000  | jenkins | v1.34.0 | 25 Sep 24 12:17 PDT | 25 Sep 24 12:17 PDT |
	| start   | -p docker-flags-398000                | docker-flags-398000       | jenkins | v1.34.0 | 25 Sep 24 12:17 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-093000             | force-systemd-flag-093000 | jenkins | v1.34.0 | 25 Sep 24 12:17 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-093000          | force-systemd-flag-093000 | jenkins | v1.34.0 | 25 Sep 24 12:17 PDT | 25 Sep 24 12:17 PDT |
	| start   | -p cert-expiration-271000             | cert-expiration-271000    | jenkins | v1.34.0 | 25 Sep 24 12:17 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-398000 ssh               | docker-flags-398000       | jenkins | v1.34.0 | 25 Sep 24 12:18 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-398000 ssh               | docker-flags-398000       | jenkins | v1.34.0 | 25 Sep 24 12:18 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-398000                | docker-flags-398000       | jenkins | v1.34.0 | 25 Sep 24 12:18 PDT | 25 Sep 24 12:18 PDT |
	| start   | -p cert-options-322000                | cert-options-322000       | jenkins | v1.34.0 | 25 Sep 24 12:18 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-322000 ssh               | cert-options-322000       | jenkins | v1.34.0 | 25 Sep 24 12:18 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-322000 -- sudo        | cert-options-322000       | jenkins | v1.34.0 | 25 Sep 24 12:18 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-322000                | cert-options-322000       | jenkins | v1.34.0 | 25 Sep 24 12:18 PDT | 25 Sep 24 12:18 PDT |
	| start   | -p running-upgrade-796000             | minikube                  | jenkins | v1.26.0 | 25 Sep 24 12:18 PDT | 25 Sep 24 12:19 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-796000             | running-upgrade-796000    | jenkins | v1.34.0 | 25 Sep 24 12:19 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-271000             | cert-expiration-271000    | jenkins | v1.34.0 | 25 Sep 24 12:21 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-271000             | cert-expiration-271000    | jenkins | v1.34.0 | 25 Sep 24 12:21 PDT | 25 Sep 24 12:21 PDT |
	| start   | -p kubernetes-upgrade-378000          | kubernetes-upgrade-378000 | jenkins | v1.34.0 | 25 Sep 24 12:21 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-378000          | kubernetes-upgrade-378000 | jenkins | v1.34.0 | 25 Sep 24 12:21 PDT | 25 Sep 24 12:21 PDT |
	| start   | -p kubernetes-upgrade-378000          | kubernetes-upgrade-378000 | jenkins | v1.34.0 | 25 Sep 24 12:21 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-378000          | kubernetes-upgrade-378000 | jenkins | v1.34.0 | 25 Sep 24 12:21 PDT | 25 Sep 24 12:21 PDT |
	| start   | -p stopped-upgrade-814000             | minikube                  | jenkins | v1.26.0 | 25 Sep 24 12:21 PDT | 25 Sep 24 12:22 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-814000 stop           | minikube                  | jenkins | v1.26.0 | 25 Sep 24 12:22 PDT | 25 Sep 24 12:22 PDT |
	| start   | -p stopped-upgrade-814000             | stopped-upgrade-814000    | jenkins | v1.34.0 | 25 Sep 24 12:22 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/25 12:22:24
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0925 12:22:24.792093    5014 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:22:24.792240    5014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:22:24.792244    5014 out.go:358] Setting ErrFile to fd 2...
	I0925 12:22:24.792247    5014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:22:24.792406    5014 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:22:24.793543    5014 out.go:352] Setting JSON to false
	I0925 12:22:24.812242    5014 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4915,"bootTime":1727287229,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 12:22:24.812318    5014 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 12:22:24.817777    5014 out.go:177] * [stopped-upgrade-814000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 12:22:24.825707    5014 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 12:22:24.825792    5014 notify.go:220] Checking for updates...
	I0925 12:22:24.832802    5014 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:22:24.835754    5014 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 12:22:24.839753    5014 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 12:22:24.842690    5014 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 12:22:24.845760    5014 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 12:22:24.849073    5014 config.go:182] Loaded profile config "stopped-upgrade-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0925 12:22:24.852705    5014 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0925 12:22:24.855713    5014 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 12:22:24.859742    5014 out.go:177] * Using the qemu2 driver based on existing profile
	I0925 12:22:24.866714    5014 start.go:297] selected driver: qemu2
	I0925 12:22:24.866722    5014 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-814000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50513 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-814000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0925 12:22:24.866767    5014 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 12:22:24.869477    5014 cni.go:84] Creating CNI manager for ""
	I0925 12:22:24.869512    5014 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 12:22:24.869537    5014 start.go:340] cluster config:
	{Name:stopped-upgrade-814000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50513 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-814000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0925 12:22:24.869605    5014 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:22:24.875779    5014 out.go:177] * Starting "stopped-upgrade-814000" primary control-plane node in "stopped-upgrade-814000" cluster
	I0925 12:22:24.879736    5014 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0925 12:22:24.879751    5014 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0925 12:22:24.879759    5014 cache.go:56] Caching tarball of preloaded images
	I0925 12:22:24.879817    5014 preload.go:172] Found /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 12:22:24.879823    5014 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0925 12:22:24.879880    5014 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/config.json ...
	I0925 12:22:24.880333    5014 start.go:360] acquireMachinesLock for stopped-upgrade-814000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:22:24.880363    5014 start.go:364] duration metric: took 23.541µs to acquireMachinesLock for "stopped-upgrade-814000"
	I0925 12:22:24.880373    5014 start.go:96] Skipping create...Using existing machine configuration
	I0925 12:22:24.880379    5014 fix.go:54] fixHost starting: 
	I0925 12:22:24.880497    5014 fix.go:112] recreateIfNeeded on stopped-upgrade-814000: state=Stopped err=<nil>
	W0925 12:22:24.880505    5014 fix.go:138] unexpected machine state, will restart: <nil>
	I0925 12:22:24.884739    5014 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-814000" ...
	I0925 12:22:24.892729    5014 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:22:24.892807    5014 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/stopped-upgrade-814000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/stopped-upgrade-814000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/stopped-upgrade-814000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50480-:22,hostfwd=tcp::50481-:2376,hostname=stopped-upgrade-814000 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/stopped-upgrade-814000/disk.qcow2
	I0925 12:22:24.939237    5014 main.go:141] libmachine: STDOUT: 
	I0925 12:22:24.939268    5014 main.go:141] libmachine: STDERR: 
	I0925 12:22:24.939275    5014 main.go:141] libmachine: Waiting for VM to start (ssh -p 50480 docker@127.0.0.1)...
	I0925 12:22:28.071584    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:22:33.073809    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:22:33.074006    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:22:33.085870    4893 logs.go:276] 2 containers: [b00200b3d354 a6b9bffba162]
	I0925 12:22:33.085959    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:22:33.098345    4893 logs.go:276] 2 containers: [b2d5ac54fa16 d84c769abee1]
	I0925 12:22:33.098441    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:22:33.112457    4893 logs.go:276] 1 containers: [51aa10e3317b]
	I0925 12:22:33.112533    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:22:33.123170    4893 logs.go:276] 2 containers: [f89f353ba7ff bfc63bd4a8f0]
	I0925 12:22:33.123262    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:22:33.133878    4893 logs.go:276] 1 containers: [da185f1cc653]
	I0925 12:22:33.133962    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:22:33.144480    4893 logs.go:276] 2 containers: [14ba24e33617 acf1d589a549]
	I0925 12:22:33.144576    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:22:33.154916    4893 logs.go:276] 0 containers: []
	W0925 12:22:33.154929    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:22:33.155011    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:22:33.166151    4893 logs.go:276] 1 containers: [797676f920e0]
	I0925 12:22:33.166169    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:22:33.166174    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:22:33.202187    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:22:33.202280    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:22:33.202752    4893 logs.go:123] Gathering logs for kube-scheduler [bfc63bd4a8f0] ...
	I0925 12:22:33.202757    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfc63bd4a8f0"
	I0925 12:22:33.218076    4893 logs.go:123] Gathering logs for kube-proxy [da185f1cc653] ...
	I0925 12:22:33.218087    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da185f1cc653"
	I0925 12:22:33.238802    4893 logs.go:123] Gathering logs for kube-controller-manager [14ba24e33617] ...
	I0925 12:22:33.238814    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ba24e33617"
	I0925 12:22:33.256488    4893 logs.go:123] Gathering logs for storage-provisioner [797676f920e0] ...
	I0925 12:22:33.256498    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 797676f920e0"
	I0925 12:22:33.268246    4893 logs.go:123] Gathering logs for coredns [51aa10e3317b] ...
	I0925 12:22:33.268259    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51aa10e3317b"
	I0925 12:22:33.279684    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:22:33.279695    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:22:33.283969    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:22:33.283975    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:22:33.319168    4893 logs.go:123] Gathering logs for kube-apiserver [a6b9bffba162] ...
	I0925 12:22:33.319180    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6b9bffba162"
	I0925 12:22:33.338560    4893 logs.go:123] Gathering logs for etcd [b2d5ac54fa16] ...
	I0925 12:22:33.338575    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d5ac54fa16"
	I0925 12:22:33.352850    4893 logs.go:123] Gathering logs for kube-scheduler [f89f353ba7ff] ...
	I0925 12:22:33.352865    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f89f353ba7ff"
	I0925 12:22:33.366228    4893 logs.go:123] Gathering logs for kube-controller-manager [acf1d589a549] ...
	I0925 12:22:33.366239    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf1d589a549"
	I0925 12:22:33.380094    4893 logs.go:123] Gathering logs for kube-apiserver [b00200b3d354] ...
	I0925 12:22:33.380103    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b00200b3d354"
	I0925 12:22:33.394349    4893 logs.go:123] Gathering logs for etcd [d84c769abee1] ...
	I0925 12:22:33.394362    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d84c769abee1"
	I0925 12:22:33.409611    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:22:33.409624    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:22:33.435068    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:22:33.435075    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:22:33.447129    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:22:33.447140    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:22:33.447168    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:22:33.447173    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:22:33.447176    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:22:33.447180    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:22:33.447182    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:22:43.451296    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:22:44.998183    5014 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/config.json ...
	I0925 12:22:44.999113    5014 machine.go:93] provisionDockerMachine start ...
	I0925 12:22:44.999341    5014 main.go:141] libmachine: Using SSH client type: native
	I0925 12:22:44.999837    5014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102bd1c00] 0x102bd4440 <nil>  [] 0s} localhost 50480 <nil> <nil>}
	I0925 12:22:44.999851    5014 main.go:141] libmachine: About to run SSH command:
	hostname
	I0925 12:22:45.101847    5014 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0925 12:22:45.101897    5014 buildroot.go:166] provisioning hostname "stopped-upgrade-814000"
	I0925 12:22:45.102076    5014 main.go:141] libmachine: Using SSH client type: native
	I0925 12:22:45.102408    5014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102bd1c00] 0x102bd4440 <nil>  [] 0s} localhost 50480 <nil> <nil>}
	I0925 12:22:45.102425    5014 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-814000 && echo "stopped-upgrade-814000" | sudo tee /etc/hostname
	I0925 12:22:45.199541    5014 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-814000
	
	I0925 12:22:45.199652    5014 main.go:141] libmachine: Using SSH client type: native
	I0925 12:22:45.199855    5014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102bd1c00] 0x102bd4440 <nil>  [] 0s} localhost 50480 <nil> <nil>}
	I0925 12:22:45.199869    5014 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-814000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-814000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-814000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0925 12:22:45.282196    5014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0925 12:22:45.282215    5014 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19681-1412/.minikube CaCertPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19681-1412/.minikube}
	I0925 12:22:45.282227    5014 buildroot.go:174] setting up certificates
	I0925 12:22:45.282235    5014 provision.go:84] configureAuth start
	I0925 12:22:45.282244    5014 provision.go:143] copyHostCerts
	I0925 12:22:45.282330    5014 exec_runner.go:144] found /Users/jenkins/minikube-integration/19681-1412/.minikube/ca.pem, removing ...
	I0925 12:22:45.282342    5014 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19681-1412/.minikube/ca.pem
	I0925 12:22:45.282472    5014 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19681-1412/.minikube/ca.pem (1082 bytes)
	I0925 12:22:45.282697    5014 exec_runner.go:144] found /Users/jenkins/minikube-integration/19681-1412/.minikube/cert.pem, removing ...
	I0925 12:22:45.282703    5014 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19681-1412/.minikube/cert.pem
	I0925 12:22:45.282772    5014 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19681-1412/.minikube/cert.pem (1123 bytes)
	I0925 12:22:45.282915    5014 exec_runner.go:144] found /Users/jenkins/minikube-integration/19681-1412/.minikube/key.pem, removing ...
	I0925 12:22:45.282920    5014 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19681-1412/.minikube/key.pem
	I0925 12:22:45.282987    5014 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19681-1412/.minikube/key.pem (1675 bytes)
	I0925 12:22:45.283102    5014 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-814000 san=[127.0.0.1 localhost minikube stopped-upgrade-814000]
	I0925 12:22:45.406731    5014 provision.go:177] copyRemoteCerts
	I0925 12:22:45.406773    5014 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0925 12:22:45.406781    5014 sshutil.go:53] new ssh client: &{IP:localhost Port:50480 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/stopped-upgrade-814000/id_rsa Username:docker}
	I0925 12:22:45.445878    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0925 12:22:45.453109    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0925 12:22:45.459920    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0925 12:22:45.466446    5014 provision.go:87] duration metric: took 184.204958ms to configureAuth
	I0925 12:22:45.466454    5014 buildroot.go:189] setting minikube options for container-runtime
	I0925 12:22:45.466554    5014 config.go:182] Loaded profile config "stopped-upgrade-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0925 12:22:45.466597    5014 main.go:141] libmachine: Using SSH client type: native
	I0925 12:22:45.466687    5014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102bd1c00] 0x102bd4440 <nil>  [] 0s} localhost 50480 <nil> <nil>}
	I0925 12:22:45.466691    5014 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0925 12:22:45.543477    5014 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0925 12:22:45.543488    5014 buildroot.go:70] root file system type: tmpfs
	I0925 12:22:45.543545    5014 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0925 12:22:45.543610    5014 main.go:141] libmachine: Using SSH client type: native
	I0925 12:22:45.543731    5014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102bd1c00] 0x102bd4440 <nil>  [] 0s} localhost 50480 <nil> <nil>}
	I0925 12:22:45.543766    5014 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0925 12:22:45.621343    5014 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0925 12:22:45.621405    5014 main.go:141] libmachine: Using SSH client type: native
	I0925 12:22:45.621528    5014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102bd1c00] 0x102bd4440 <nil>  [] 0s} localhost 50480 <nil> <nil>}
	I0925 12:22:45.621538    5014 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0925 12:22:45.995561    5014 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0925 12:22:45.995575    5014 machine.go:96] duration metric: took 996.470709ms to provisionDockerMachine
	I0925 12:22:45.995582    5014 start.go:293] postStartSetup for "stopped-upgrade-814000" (driver="qemu2")
	I0925 12:22:45.995599    5014 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0925 12:22:45.995664    5014 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0925 12:22:45.995673    5014 sshutil.go:53] new ssh client: &{IP:localhost Port:50480 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/stopped-upgrade-814000/id_rsa Username:docker}
	I0925 12:22:46.036424    5014 ssh_runner.go:195] Run: cat /etc/os-release
	I0925 12:22:46.037757    5014 info.go:137] Remote host: Buildroot 2021.02.12
	I0925 12:22:46.037764    5014 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19681-1412/.minikube/addons for local assets ...
	I0925 12:22:46.037849    5014 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19681-1412/.minikube/files for local assets ...
	I0925 12:22:46.037980    5014 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19681-1412/.minikube/files/etc/ssl/certs/19342.pem -> 19342.pem in /etc/ssl/certs
	I0925 12:22:46.038119    5014 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0925 12:22:46.041133    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/files/etc/ssl/certs/19342.pem --> /etc/ssl/certs/19342.pem (1708 bytes)
	I0925 12:22:46.047758    5014 start.go:296] duration metric: took 52.171625ms for postStartSetup
	I0925 12:22:46.047772    5014 fix.go:56] duration metric: took 21.167787792s for fixHost
	I0925 12:22:46.047808    5014 main.go:141] libmachine: Using SSH client type: native
	I0925 12:22:46.047919    5014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102bd1c00] 0x102bd4440 <nil>  [] 0s} localhost 50480 <nil> <nil>}
	I0925 12:22:46.047925    5014 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0925 12:22:46.119744    5014 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727292166.409018338
	
	I0925 12:22:46.119751    5014 fix.go:216] guest clock: 1727292166.409018338
	I0925 12:22:46.119755    5014 fix.go:229] Guest: 2024-09-25 12:22:46.409018338 -0700 PDT Remote: 2024-09-25 12:22:46.047774 -0700 PDT m=+21.284178960 (delta=361.244338ms)
	I0925 12:22:46.119767    5014 fix.go:200] guest clock delta is within tolerance: 361.244338ms
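The fix step above reads the guest clock with "date +%s.%N" over SSH and compares it to the host wall clock, accepting the machine when the delta stays within tolerance. A rough manual reproduction of the same check (port and user taken from the log; assumes bc on the host and settles for whole-second precision there):

    # Print the guest/host clock delta in seconds.
    guest=$(ssh -p 50480 docker@localhost 'date +%s.%N')   # guest epoch time
    host=$(date +%s)                                       # host epoch time, whole seconds
    echo "delta: $(echo "$guest - $host" | bc) s"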
	I0925 12:22:46.119770    5014 start.go:83] releasing machines lock for "stopped-upgrade-814000", held for 21.239796083s
	I0925 12:22:46.119839    5014 ssh_runner.go:195] Run: cat /version.json
	I0925 12:22:46.119853    5014 sshutil.go:53] new ssh client: &{IP:localhost Port:50480 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/stopped-upgrade-814000/id_rsa Username:docker}
	I0925 12:22:46.119839    5014 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0925 12:22:46.119885    5014 sshutil.go:53] new ssh client: &{IP:localhost Port:50480 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/stopped-upgrade-814000/id_rsa Username:docker}
	W0925 12:22:46.120416    5014 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50480: connect: connection refused
	I0925 12:22:46.120438    5014 retry.go:31] will retry after 361.099596ms: dial tcp [::1]:50480: connect: connection refused
	W0925 12:22:46.156470    5014 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0925 12:22:46.156530    5014 ssh_runner.go:195] Run: systemctl --version
	I0925 12:22:46.158460    5014 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0925 12:22:46.160253    5014 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0925 12:22:46.160290    5014 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0925 12:22:46.163090    5014 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0925 12:22:46.167605    5014 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
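The two find/sed passes above normalize every bridge and podman CNI config under /etc/cni/net.d to the cluster pod CIDR, dropping IPv6 entries and rewriting subnet/gateway values. Distilled to the one file the log reports as configured (the "before" values are illustrative):

    # before: "subnet": "10.88.0.0/16"   "gateway": "10.88.0.1"
    # after:  "subnet": "10.244.0.0/16"  "gateway": "10.244.0.1"
    sudo sed -i -r \
      -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' \
      -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' \
      /etc/cni/net.d/87-podman-bridge.conflist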
	I0925 12:22:46.167613    5014 start.go:495] detecting cgroup driver to use...
	I0925 12:22:46.167689    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 12:22:46.174696    5014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0925 12:22:46.178073    5014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0925 12:22:46.181347    5014 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0925 12:22:46.181376    5014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0925 12:22:46.184398    5014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 12:22:46.187198    5014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0925 12:22:46.190300    5014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 12:22:46.193587    5014 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0925 12:22:46.197059    5014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0925 12:22:46.200703    5014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0925 12:22:46.203782    5014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0925 12:22:46.206820    5014 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0925 12:22:46.209744    5014 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0925 12:22:46.212576    5014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 12:22:46.295336    5014 ssh_runner.go:195] Run: sudo systemctl restart containerd
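Taken together, the sed edits above steer /etc/containerd/config.toml toward the cgroupfs driver and the expected CRI settings. The fragment they converge on would look roughly like the following; the exact nesting depends on the stock config shipped in the guest image, so treat this as a sketch, not the verbatim file:

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      sandbox_image = "registry.k8s.io/pause:3.7"
      restrict_oom_score_adj = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = false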
	I0925 12:22:46.301298    5014 start.go:495] detecting cgroup driver to use...
	I0925 12:22:46.301377    5014 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0925 12:22:46.306980    5014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 12:22:46.311681    5014 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0925 12:22:46.318131    5014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 12:22:46.323087    5014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 12:22:46.327764    5014 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0925 12:22:46.368791    5014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 12:22:46.373514    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 12:22:46.378755    5014 ssh_runner.go:195] Run: which cri-dockerd
	I0925 12:22:46.380119    5014 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0925 12:22:46.382659    5014 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0925 12:22:46.387992    5014 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0925 12:22:46.466026    5014 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0925 12:22:46.540112    5014 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0925 12:22:46.540175    5014 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0925 12:22:46.545219    5014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 12:22:46.623213    5014 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0925 12:22:47.776622    5014 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.153412792s)
	I0925 12:22:47.776684    5014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0925 12:22:47.783535    5014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0925 12:22:47.788244    5014 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0925 12:22:47.866697    5014 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0925 12:22:47.937080    5014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 12:22:48.015618    5014 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0925 12:22:48.021365    5014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0925 12:22:48.025619    5014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 12:22:48.104893    5014 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0925 12:22:48.142864    5014 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0925 12:22:48.142951    5014 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0925 12:22:48.145930    5014 start.go:563] Will wait 60s for crictl version
	I0925 12:22:48.145992    5014 ssh_runner.go:195] Run: which crictl
	I0925 12:22:48.147452    5014 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0925 12:22:48.161930    5014 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
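With /etc/crictl.yaml now pointing at the cri-dockerd socket, the same version probe can be repeated by hand; the endpoint may also be passed explicitly instead of being read from the config file:

    # Query the CRI runtime through cri-dockerd.
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version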
	I0925 12:22:48.162021    5014 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0925 12:22:48.180983    5014 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0925 12:22:48.202340    5014 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0925 12:22:48.202424    5014 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0925 12:22:48.204026    5014 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
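The hosts update above is a small replace-or-add idiom: strip any stale host.minikube.internal line, append a fresh one, stage the result in a PID-keyed temp file, and copy it over /etc/hosts in one step. Generalized with the name and IP from the log:

    # Replace-or-add a single /etc/hosts entry without duplicating it.
    name=host.minikube.internal; ip=10.0.2.2
    { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts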
I0925 12:22:48.207670    5014 kubeadm.go:883] updating cluster {Name:stopped-upgrade-814000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50513 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-814000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0925 12:22:48.207716    5014 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0925 12:22:48.207776    5014 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 12:22:48.219030    5014 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0925 12:22:48.219039    5014 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0925 12:22:48.219099    5014 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0925 12:22:48.222197    5014 ssh_runner.go:195] Run: which lz4
	I0925 12:22:48.223476    5014 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0925 12:22:48.224788    5014 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0925 12:22:48.224798    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0925 12:22:49.199398    5014 docker.go:649] duration metric: took 975.977417ms to copy over tarball
	I0925 12:22:49.199467    5014 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
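The preload is a ~360 MB lz4-compressed tarball of the docker image store, copied in and unpacked under /var instead of pulling each image individually. With GNU tar (matching the -I lz4 flag used above), its contents can be listed without extracting:

    # Peek at the first entries of the preload tarball.
    tar -I lz4 -tf preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 | head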
	I0925 12:22:48.452242    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:22:48.452351    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:22:48.464427    4893 logs.go:276] 2 containers: [b00200b3d354 a6b9bffba162]
	I0925 12:22:48.464519    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:22:48.479751    4893 logs.go:276] 2 containers: [b2d5ac54fa16 d84c769abee1]
	I0925 12:22:48.479842    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:22:48.491971    4893 logs.go:276] 1 containers: [51aa10e3317b]
	I0925 12:22:48.492059    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:22:48.504523    4893 logs.go:276] 2 containers: [f89f353ba7ff bfc63bd4a8f0]
	I0925 12:22:48.504610    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:22:48.517149    4893 logs.go:276] 1 containers: [da185f1cc653]
	I0925 12:22:48.517247    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:22:48.531252    4893 logs.go:276] 2 containers: [14ba24e33617 acf1d589a549]
	I0925 12:22:48.531348    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:22:48.542729    4893 logs.go:276] 0 containers: []
	W0925 12:22:48.542743    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:22:48.542823    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:22:48.555648    4893 logs.go:276] 1 containers: [797676f920e0]
	I0925 12:22:48.555668    4893 logs.go:123] Gathering logs for etcd [d84c769abee1] ...
	I0925 12:22:48.555673    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d84c769abee1"
	I0925 12:22:48.572683    4893 logs.go:123] Gathering logs for etcd [b2d5ac54fa16] ...
	I0925 12:22:48.572695    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d5ac54fa16"
	I0925 12:22:48.588459    4893 logs.go:123] Gathering logs for kube-apiserver [b00200b3d354] ...
	I0925 12:22:48.588472    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b00200b3d354"
	I0925 12:22:48.604701    4893 logs.go:123] Gathering logs for kube-scheduler [f89f353ba7ff] ...
	I0925 12:22:48.604719    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f89f353ba7ff"
	I0925 12:22:48.617419    4893 logs.go:123] Gathering logs for storage-provisioner [797676f920e0] ...
	I0925 12:22:48.617432    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 797676f920e0"
	I0925 12:22:48.630418    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:22:48.630432    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:22:48.669572    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:22:48.669669    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:22:48.670162    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:22:48.670170    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:22:48.708924    4893 logs.go:123] Gathering logs for coredns [51aa10e3317b] ...
	I0925 12:22:48.708938    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51aa10e3317b"
	I0925 12:22:48.721918    4893 logs.go:123] Gathering logs for kube-scheduler [bfc63bd4a8f0] ...
	I0925 12:22:48.721932    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfc63bd4a8f0"
	I0925 12:22:48.737684    4893 logs.go:123] Gathering logs for kube-controller-manager [14ba24e33617] ...
	I0925 12:22:48.737696    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ba24e33617"
	I0925 12:22:48.760369    4893 logs.go:123] Gathering logs for kube-controller-manager [acf1d589a549] ...
	I0925 12:22:48.760382    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf1d589a549"
	I0925 12:22:48.775715    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:22:48.775726    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:22:48.789862    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:22:48.789875    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:22:48.794995    4893 logs.go:123] Gathering logs for kube-proxy [da185f1cc653] ...
	I0925 12:22:48.795009    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da185f1cc653"
	I0925 12:22:48.808634    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:22:48.808646    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:22:48.834457    4893 logs.go:123] Gathering logs for kube-apiserver [a6b9bffba162] ...
	I0925 12:22:48.834474    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6b9bffba162"
	I0925 12:22:48.855957    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:22:48.855969    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:22:48.856000    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:22:48.856005    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:22:48.856008    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:22:48.856011    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:22:48.856014    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
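The kubelet problems flagged above come from the Node authorizer: a kubelet authenticates as system:node:<name> and may only read objects (here the coredns ConfigMap) that are tied to pods already bound to its node, so the watch is refused until that relationship exists. As a sketch, the decision can be probed from the control plane via impersonation (kubeconfig path as in the log):

    # Ask the API server whether the node identity may list the ConfigMaps.
    kubectl --kubeconfig /var/lib/minikube/kubeconfig auth can-i list configmaps \
      -n kube-system --as system:node:running-upgrade-796000 --as-group system:nodes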
	I0925 12:22:50.359251    5014 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.159791416s)
	I0925 12:22:50.359266    5014 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0925 12:22:50.374767    5014 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0925 12:22:50.378302    5014 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0925 12:22:50.383391    5014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 12:22:50.463264    5014 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0925 12:22:52.043980    5014 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.580719375s)
	I0925 12:22:52.044100    5014 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 12:22:52.062708    5014 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0925 12:22:52.062724    5014 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0925 12:22:52.062729    5014 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0925 12:22:52.066967    5014 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 12:22:52.068484    5014 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0925 12:22:52.070608    5014 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 12:22:52.070674    5014 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0925 12:22:52.072906    5014 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0925 12:22:52.073005    5014 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0925 12:22:52.074874    5014 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0925 12:22:52.074930    5014 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0925 12:22:52.076083    5014 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0925 12:22:52.076234    5014 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0925 12:22:52.077392    5014 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0925 12:22:52.077393    5014 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0925 12:22:52.078371    5014 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0925 12:22:52.078517    5014 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0925 12:22:52.079297    5014 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0925 12:22:52.080196    5014 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0925 12:22:52.505232    5014 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0925 12:22:52.511908    5014 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0925 12:22:52.519816    5014 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0925 12:22:52.519847    5014 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0925 12:22:52.519920    5014 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0925 12:22:52.523655    5014 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0925 12:22:52.524031    5014 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0925 12:22:52.544506    5014 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0925 12:22:52.549311    5014 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0925 12:22:52.557108    5014 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0925 12:22:52.557155    5014 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0925 12:22:52.557157    5014 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0925 12:22:52.557170    5014 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0925 12:22:52.557170    5014 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0925 12:22:52.557195    5014 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0925 12:22:52.557229    5014 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0925 12:22:52.557231    5014 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0925 12:22:52.557270    5014 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0925 12:22:52.557289    5014 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0925 12:22:52.560533    5014 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0925 12:22:52.560556    5014 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0925 12:22:52.560616    5014 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0925 12:22:52.581483    5014 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0925 12:22:52.581503    5014 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0925 12:22:52.581565    5014 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	W0925 12:22:52.589104    5014 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0925 12:22:52.589253    5014 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0925 12:22:52.592750    5014 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0925 12:22:52.592800    5014 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0925 12:22:52.592815    5014 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0925 12:22:52.592848    5014 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0925 12:22:52.593811    5014 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0925 12:22:52.603098    5014 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0925 12:22:52.603233    5014 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0925 12:22:52.603431    5014 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0925 12:22:52.603449    5014 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0925 12:22:52.603491    5014 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0925 12:22:52.604579    5014 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0925 12:22:52.604591    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0925 12:22:52.604617    5014 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0925 12:22:52.604629    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0925 12:22:52.624224    5014 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0925 12:22:52.624238    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0925 12:22:52.634943    5014 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0925 12:22:52.635075    5014 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0925 12:22:52.681528    5014 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0925 12:22:52.681557    5014 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0925 12:22:52.681584    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0925 12:22:52.759330    5014 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0925 12:22:52.759348    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0925 12:22:52.862400    5014 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0925 12:22:52.868513    5014 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0925 12:22:52.868647    5014 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 12:22:52.909259    5014 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0925 12:22:52.909286    5014 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 12:22:52.909364    5014 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 12:22:52.939429    5014 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0925 12:22:52.939449    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0925 12:22:52.953418    5014 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0925 12:22:52.953560    5014 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0925 12:22:53.090292    5014 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0925 12:22:53.090311    5014 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0925 12:22:53.090342    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0925 12:22:53.118283    5014 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0925 12:22:53.118308    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0925 12:22:53.359403    5014 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
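Every image marked "needs transfer" above follows the same pattern: stat the target path to see whether a copy already exists, scp the cached archive into /var/lib/minikube/images, remove any stale tag with docker rmi, then pipe the archive into docker load. The load step for one image, as run in the log:

    # Load a cached image archive into the guest docker and confirm the tag landed.
    sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load
    docker images --format '{{.Repository}}:{{.Tag}}' | grep etcd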
	I0925 12:22:53.359439    5014 cache_images.go:92] duration metric: took 1.296726834s to LoadCachedImages
	W0925 12:22:53.359475    5014 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0925 12:22:53.359480    5014 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0925 12:22:53.359524    5014 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-814000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-814000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0925 12:22:53.359601    5014 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0925 12:22:53.372692    5014 cni.go:84] Creating CNI manager for ""
	I0925 12:22:53.372709    5014 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 12:22:53.372714    5014 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0925 12:22:53.372727    5014 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-814000 NodeName:stopped-upgrade-814000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0925 12:22:53.372790    5014 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-814000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
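The four stacked documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a sketch, such a file can be sanity-checked with a kubeadm dry run against the pinned binaries; note that a dry run still executes preflight checks:

    # Exercise the rendered config without persisting any changes.
    sudo /var/lib/minikube/binaries/v1.24.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run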
	
	I0925 12:22:53.373166    5014 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0925 12:22:53.376230    5014 binaries.go:44] Found k8s binaries, skipping transfer
	I0925 12:22:53.376262    5014 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0925 12:22:53.378800    5014 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0925 12:22:53.383460    5014 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0925 12:22:53.388090    5014 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0925 12:22:53.393673    5014 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0925 12:22:53.394922    5014 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0925 12:22:53.398627    5014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 12:22:53.478288    5014 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0925 12:22:53.488467    5014 certs.go:68] Setting up /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000 for IP: 10.0.2.15
	I0925 12:22:53.488479    5014 certs.go:194] generating shared ca certs ...
	I0925 12:22:53.488488    5014 certs.go:226] acquiring lock for ca certs: {Name:mk58bb807ba332e9ca8b6e9b3a29d33fd7cd9838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:22:53.488671    5014 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19681-1412/.minikube/ca.key
	I0925 12:22:53.488721    5014 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19681-1412/.minikube/proxy-client-ca.key
	I0925 12:22:53.488732    5014 certs.go:256] generating profile certs ...
	I0925 12:22:53.488811    5014 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/client.key
	I0925 12:22:53.488828    5014 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/apiserver.key.ea672eda
	I0925 12:22:53.488836    5014 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/apiserver.crt.ea672eda with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0925 12:22:53.615767    5014 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/apiserver.crt.ea672eda ...
	I0925 12:22:53.615782    5014 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/apiserver.crt.ea672eda: {Name:mk60c98bb796f71eedc75ba92bb2d1bc236f9239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:22:53.616099    5014 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/apiserver.key.ea672eda ...
	I0925 12:22:53.616106    5014 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/apiserver.key.ea672eda: {Name:mkb802a7e5feb6dffc6f31ee25ad7e0e4f562c1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:22:53.616633    5014 certs.go:381] copying /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/apiserver.crt.ea672eda -> /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/apiserver.crt
	I0925 12:22:53.617195    5014 certs.go:385] copying /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/apiserver.key.ea672eda -> /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/apiserver.key
	I0925 12:22:53.617367    5014 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/proxy-client.key
	I0925 12:22:53.617517    5014 certs.go:484] found cert: /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/1934.pem (1338 bytes)
	W0925 12:22:53.617548    5014 certs.go:480] ignoring /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/1934_empty.pem, impossibly tiny 0 bytes
	I0925 12:22:53.617555    5014 certs.go:484] found cert: /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca-key.pem (1679 bytes)
	I0925 12:22:53.617578    5014 certs.go:484] found cert: /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem (1082 bytes)
	I0925 12:22:53.617597    5014 certs.go:484] found cert: /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem (1123 bytes)
	I0925 12:22:53.617616    5014 certs.go:484] found cert: /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/key.pem (1675 bytes)
	I0925 12:22:53.617654    5014 certs.go:484] found cert: /Users/jenkins/minikube-integration/19681-1412/.minikube/files/etc/ssl/certs/19342.pem (1708 bytes)
	I0925 12:22:53.617976    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0925 12:22:53.625322    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0925 12:22:53.632922    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0925 12:22:53.639857    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0925 12:22:53.646471    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0925 12:22:53.653808    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0925 12:22:53.661186    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0925 12:22:53.668051    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0925 12:22:53.674637    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/files/etc/ssl/certs/19342.pem --> /usr/share/ca-certificates/19342.pem (1708 bytes)
	I0925 12:22:53.681619    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0925 12:22:53.688469    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/1934.pem --> /usr/share/ca-certificates/1934.pem (1338 bytes)
	I0925 12:22:53.695042    5014 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0925 12:22:53.700249    5014 ssh_runner.go:195] Run: openssl version
	I0925 12:22:53.702027    5014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19342.pem && ln -fs /usr/share/ca-certificates/19342.pem /etc/ssl/certs/19342.pem"
	I0925 12:22:53.705275    5014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19342.pem
	I0925 12:22:53.706866    5014 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 25 18:45 /usr/share/ca-certificates/19342.pem
	I0925 12:22:53.706890    5014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19342.pem
	I0925 12:22:53.708733    5014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19342.pem /etc/ssl/certs/3ec20f2e.0"
	I0925 12:22:53.711593    5014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0925 12:22:53.714646    5014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0925 12:22:53.716180    5014 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 25 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I0925 12:22:53.716204    5014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0925 12:22:53.717870    5014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0925 12:22:53.721166    5014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1934.pem && ln -fs /usr/share/ca-certificates/1934.pem /etc/ssl/certs/1934.pem"
	I0925 12:22:53.724077    5014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1934.pem
	I0925 12:22:53.725499    5014 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 25 18:45 /usr/share/ca-certificates/1934.pem
	I0925 12:22:53.725526    5014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1934.pem
	I0925 12:22:53.727434    5014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1934.pem /etc/ssl/certs/51391683.0"
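The test -L / ln -fs pairs above implement OpenSSL's hashed certificate directory convention: each CA must be reachable through a symlink named <subject-hash>.0, which is where 3ec20f2e.0, b5213941.0 and 51391683.0 come from. The hash is derived from the cert itself:

    # Create the hash-named symlink for one CA, as done above for each pem.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"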
	I0925 12:22:53.730626    5014 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0925 12:22:53.732177    5014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0925 12:22:53.734067    5014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0925 12:22:53.736168    5014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0925 12:22:53.738073    5014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0925 12:22:53.740170    5014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0925 12:22:53.741971    5014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
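Each "-checkend 86400" run above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; openssl exits non-zero if it will have expired by then, and that exit status is all the caller inspects. Standalone:

    # Exit status reports whether the cert survives the next 24 hours.
    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
      echo "ok: valid for at least 24h"
    else
      echo "warning: expires within 24h"
    fi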
I0925 12:22:53.743881    5014 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-814000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50513 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-814000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0925 12:22:53.743957    5014 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0925 12:22:53.753913    5014 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0925 12:22:53.757642    5014 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0925 12:22:53.757652    5014 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0925 12:22:53.757683    5014 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0925 12:22:53.761513    5014 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0925 12:22:53.761818    5014 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-814000" does not appear in /Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:22:53.761915    5014 kubeconfig.go:62] /Users/jenkins/minikube-integration/19681-1412/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-814000" cluster setting kubeconfig missing "stopped-upgrade-814000" context setting]
	I0925 12:22:53.762088    5014 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/kubeconfig: {Name:mkc011f0309eba8a9546287478e16310d103c97e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0925 12:22:53.762529    5014 kapi.go:59] client config for stopped-upgrade-814000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/client.key", CAFile:"/Users/jenkins/minikube-integration/19681-1412/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1041aa030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0925 12:22:53.762872    5014 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0925 12:22:53.766082    5014 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-814000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
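
The diff above is how minikube decides whether to reconfigure: it renders a fresh kubeadm.yaml.new, runs diff -u against the deployed copy over SSH, and treats a non-empty diff as drift. A minimal local Go sketch of that check, assuming only diff(1)'s exit-code convention (0 = identical, 1 = different, >1 = error); minikube's real check runs the command with sudo through its ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

// configDrift runs `diff -u old new` and reports drift when diff exits 1.
// diff(1) exits 0 for identical files, 1 when they differ, and >1 on error.
func configDrift(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // identical: keep the running control plane as-is
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil // files differ: the log prints this diff verbatim
	}
	return false, "", err // exit code >1 or exec failure (e.g. missing file)
}

func main() {
	drift, diff, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("diff failed:", err)
		return
	}
	if drift {
		fmt.Print("detected kubeadm config drift (will reconfigure):\n" + diff)
	}
}

Here the drift is config-surface churn between minikube versions (the CRI socket gained a unix:// scheme, and the kubelet cgroup driver changed from systemd to cgroupfs), but any hunk is enough to trigger the stop-containers/reconfigure path that follows.
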
	I0925 12:22:53.766088    5014 kubeadm.go:1160] stopping kube-system containers ...
	I0925 12:22:53.766137    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0925 12:22:53.776817    5014 docker.go:483] Stopping containers: [f669dbb60847 85feec2130cf e18b578755b3 da6e61f7285b 68f667927419 7e4f0f83b4c3 59e96c68682d c62ebbe188b2]
	I0925 12:22:53.776904    5014 ssh_runner.go:195] Run: docker stop f669dbb60847 85feec2130cf e18b578755b3 da6e61f7285b 68f667927419 7e4f0f83b4c3 59e96c68682d c62ebbe188b2
	I0925 12:22:53.787427    5014 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0925 12:22:53.793142    5014 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0925 12:22:53.795899    5014 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0925 12:22:53.795904    5014 kubeadm.go:157] found existing configuration files:
	
	I0925 12:22:53.795931    5014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50513 /etc/kubernetes/admin.conf
	I0925 12:22:53.798348    5014 kubeadm.go:163] "https://control-plane.minikube.internal:50513" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50513 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0925 12:22:53.798370    5014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0925 12:22:53.801421    5014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50513 /etc/kubernetes/kubelet.conf
	I0925 12:22:53.804110    5014 kubeadm.go:163] "https://control-plane.minikube.internal:50513" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50513 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0925 12:22:53.804136    5014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0925 12:22:53.806540    5014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50513 /etc/kubernetes/controller-manager.conf
	I0925 12:22:53.809638    5014 kubeadm.go:163] "https://control-plane.minikube.internal:50513" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50513 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0925 12:22:53.809662    5014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0925 12:22:53.812522    5014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50513 /etc/kubernetes/scheduler.conf
	I0925 12:22:53.814873    5014 kubeadm.go:163] "https://control-plane.minikube.internal:50513" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50513 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0925 12:22:53.814903    5014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
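
The four grep/rm pairs above are a stale-kubeconfig sweep: each /etc/kubernetes/*.conf is kept only if it already references the expected control-plane endpoint, and a grep miss (including a missing file, as here) removes it so the kubeadm phases below can regenerate it. A minimal sketch of the same rule; the helper name is hypothetical, not minikube's actual API:

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeIfStale deletes conf unless it already references the expected
// control-plane endpoint. A missing file and a grep miss are treated the
// same way, matching the "may not be in ... - will remove" log lines.
func removeIfStale(conf, endpoint string) error {
	data, err := os.ReadFile(conf)
	if err == nil && strings.Contains(string(data), endpoint) {
		return nil // up to date, keep it
	}
	if err := os.Remove(conf); err != nil && !os.IsNotExist(err) {
		return err
	}
	return nil
}

func main() {
	endpoint := "https://control-plane.minikube.internal:50513"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeIfStale(conf, endpoint); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}

In this run all four greps fail with status 2 because the files simply do not exist, so the rm -f calls are no-ops and the flow proceeds straight to regenerating certs and kubeconfigs.
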
	I0925 12:22:53.817944    5014 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0925 12:22:53.821147    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0925 12:22:53.845274    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0925 12:22:54.366828    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0925 12:22:54.494522    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0925 12:22:54.525385    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0925 12:22:54.552839    5014 api_server.go:52] waiting for apiserver process to appear ...
	I0925 12:22:54.552925    5014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 12:22:55.053032    5014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 12:22:55.553609    5014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 12:22:55.557870    5014 api_server.go:72] duration metric: took 1.005054083s to wait for apiserver process to appear ...
	I0925 12:22:55.557880    5014 api_server.go:88] waiting for apiserver healthz status ...
	I0925 12:22:55.557890    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:22:58.859919    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:23:00.559991    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
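
From here, both minikube processes in this log (PID 4893 driving running-upgrade-796000 and PID 5014 driving stopped-upgrade-814000) sit in the same loop: probe https://10.0.2.15:8443/healthz, hit the per-request Client.Timeout, and retry. A minimal sketch of such a poll-until-deadline probe, assuming a 5-second per-request timeout and skipping TLS verification purely to keep the sketch self-contained (the real probe supplies proper TLS configuration):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz probes an apiserver /healthz endpoint until it answers 200 OK
// or the overall deadline expires.
func pollHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-request timeout, the source of the log's Client.Timeout errors
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	if err := pollHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}

In the log neither probe ever succeeds: every Get dies on the per-request timeout, so the outer loops alternate between healthz checks and the log-gathering passes below until the restart budget is exhausted.
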
	I0925 12:23:00.560143    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:23:03.862208    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:23:03.862753    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:23:03.902011    4893 logs.go:276] 2 containers: [b00200b3d354 a6b9bffba162]
	I0925 12:23:03.902186    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:23:03.923906    4893 logs.go:276] 2 containers: [b2d5ac54fa16 d84c769abee1]
	I0925 12:23:03.924056    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:23:03.939077    4893 logs.go:276] 1 containers: [51aa10e3317b]
	I0925 12:23:03.939183    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:23:03.951369    4893 logs.go:276] 2 containers: [f89f353ba7ff bfc63bd4a8f0]
	I0925 12:23:03.951456    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:23:03.961755    4893 logs.go:276] 1 containers: [da185f1cc653]
	I0925 12:23:03.961830    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:23:03.972765    4893 logs.go:276] 2 containers: [14ba24e33617 acf1d589a549]
	I0925 12:23:03.972838    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:23:03.987706    4893 logs.go:276] 0 containers: []
	W0925 12:23:03.987717    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:23:03.987789    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:23:03.998472    4893 logs.go:276] 1 containers: [797676f920e0]
	I0925 12:23:03.998488    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:23:03.998494    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:23:04.010369    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:23:04.010383    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:23:04.045527    4893 logs.go:123] Gathering logs for etcd [b2d5ac54fa16] ...
	I0925 12:23:04.045540    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d5ac54fa16"
	I0925 12:23:04.067368    4893 logs.go:123] Gathering logs for kube-scheduler [f89f353ba7ff] ...
	I0925 12:23:04.067382    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f89f353ba7ff"
	I0925 12:23:04.079805    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:23:04.079821    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:23:04.116375    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:23:04.116469    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:23:04.116940    4893 logs.go:123] Gathering logs for kube-controller-manager [14ba24e33617] ...
	I0925 12:23:04.116946    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ba24e33617"
	I0925 12:23:04.135223    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:23:04.135237    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:23:04.158283    4893 logs.go:123] Gathering logs for kube-controller-manager [acf1d589a549] ...
	I0925 12:23:04.158291    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf1d589a549"
	I0925 12:23:04.172348    4893 logs.go:123] Gathering logs for storage-provisioner [797676f920e0] ...
	I0925 12:23:04.172360    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 797676f920e0"
	I0925 12:23:04.183851    4893 logs.go:123] Gathering logs for kube-apiserver [a6b9bffba162] ...
	I0925 12:23:04.183862    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6b9bffba162"
	I0925 12:23:04.204023    4893 logs.go:123] Gathering logs for etcd [d84c769abee1] ...
	I0925 12:23:04.204034    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d84c769abee1"
	I0925 12:23:04.218788    4893 logs.go:123] Gathering logs for kube-scheduler [bfc63bd4a8f0] ...
	I0925 12:23:04.218800    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfc63bd4a8f0"
	I0925 12:23:04.234648    4893 logs.go:123] Gathering logs for kube-proxy [da185f1cc653] ...
	I0925 12:23:04.234660    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da185f1cc653"
	I0925 12:23:04.246809    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:23:04.246824    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:23:04.251067    4893 logs.go:123] Gathering logs for kube-apiserver [b00200b3d354] ...
	I0925 12:23:04.251078    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b00200b3d354"
	I0925 12:23:04.264713    4893 logs.go:123] Gathering logs for coredns [51aa10e3317b] ...
	I0925 12:23:04.264729    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51aa10e3317b"
	I0925 12:23:04.280063    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:23:04.280075    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:23:04.280101    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:23:04.280106    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:23:04.280109    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:23:04.280113    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:23:04.280117    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:23:05.560826    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:23:05.560866    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:23:10.561607    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:23:10.561675    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:23:14.282336    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:23:15.561930    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:23:15.561961    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:23:19.284485    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:23:19.284800    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:23:19.311676    4893 logs.go:276] 2 containers: [b00200b3d354 a6b9bffba162]
	I0925 12:23:19.311825    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:23:19.328782    4893 logs.go:276] 2 containers: [b2d5ac54fa16 d84c769abee1]
	I0925 12:23:19.328878    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:23:19.342083    4893 logs.go:276] 1 containers: [51aa10e3317b]
	I0925 12:23:19.342181    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:23:19.353505    4893 logs.go:276] 2 containers: [f89f353ba7ff bfc63bd4a8f0]
	I0925 12:23:19.353589    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:23:19.371965    4893 logs.go:276] 1 containers: [da185f1cc653]
	I0925 12:23:19.372047    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:23:19.382707    4893 logs.go:276] 2 containers: [14ba24e33617 acf1d589a549]
	I0925 12:23:19.382793    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:23:19.394110    4893 logs.go:276] 0 containers: []
	W0925 12:23:19.394122    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:23:19.394191    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:23:19.404792    4893 logs.go:276] 1 containers: [797676f920e0]
	I0925 12:23:19.404808    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:23:19.404813    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:23:19.409899    4893 logs.go:123] Gathering logs for etcd [d84c769abee1] ...
	I0925 12:23:19.409907    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d84c769abee1"
	I0925 12:23:19.426306    4893 logs.go:123] Gathering logs for kube-controller-manager [14ba24e33617] ...
	I0925 12:23:19.426315    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14ba24e33617"
	I0925 12:23:19.443469    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:23:19.443480    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:23:19.479426    4893 logs.go:123] Gathering logs for kube-apiserver [a6b9bffba162] ...
	I0925 12:23:19.479439    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6b9bffba162"
	I0925 12:23:19.498563    4893 logs.go:123] Gathering logs for etcd [b2d5ac54fa16] ...
	I0925 12:23:19.498574    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d5ac54fa16"
	I0925 12:23:19.513168    4893 logs.go:123] Gathering logs for coredns [51aa10e3317b] ...
	I0925 12:23:19.513189    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51aa10e3317b"
	I0925 12:23:19.531374    4893 logs.go:123] Gathering logs for kube-scheduler [bfc63bd4a8f0] ...
	I0925 12:23:19.531386    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfc63bd4a8f0"
	I0925 12:23:19.548374    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:23:19.548386    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:23:19.584020    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:23:19.584110    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:23:19.584574    4893 logs.go:123] Gathering logs for kube-proxy [da185f1cc653] ...
	I0925 12:23:19.584579    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da185f1cc653"
	I0925 12:23:19.595952    4893 logs.go:123] Gathering logs for storage-provisioner [797676f920e0] ...
	I0925 12:23:19.595962    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 797676f920e0"
	I0925 12:23:19.607434    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:23:19.607447    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:23:19.631067    4893 logs.go:123] Gathering logs for kube-apiserver [b00200b3d354] ...
	I0925 12:23:19.631075    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b00200b3d354"
	I0925 12:23:19.644536    4893 logs.go:123] Gathering logs for kube-scheduler [f89f353ba7ff] ...
	I0925 12:23:19.644547    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f89f353ba7ff"
	I0925 12:23:19.660151    4893 logs.go:123] Gathering logs for kube-controller-manager [acf1d589a549] ...
	I0925 12:23:19.660161    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf1d589a549"
	I0925 12:23:19.674006    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:23:19.674019    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:23:19.685793    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:23:19.685805    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:23:19.685834    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:23:19.685839    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:23:19.685843    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:23:19.685846    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:23:19.685887    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:23:20.562704    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:23:20.562751    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:23:25.563929    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:23:25.563979    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:23:29.689534    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:23:30.564533    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:23:30.564559    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:23:34.691926    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:23:34.692005    4893 kubeadm.go:597] duration metric: took 4m7.961956834s to restartPrimaryControlPlane
	W0925 12:23:34.692089    4893 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0925 12:23:34.692128    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0925 12:23:35.666325    4893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 12:23:35.671399    4893 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0925 12:23:35.674203    4893 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0925 12:23:35.677158    4893 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0925 12:23:35.677164    4893 kubeadm.go:157] found existing configuration files:
	
	I0925 12:23:35.677189    4893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50275 /etc/kubernetes/admin.conf
	I0925 12:23:35.680229    4893 kubeadm.go:163] "https://control-plane.minikube.internal:50275" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50275 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0925 12:23:35.680261    4893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0925 12:23:35.682924    4893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50275 /etc/kubernetes/kubelet.conf
	I0925 12:23:35.685475    4893 kubeadm.go:163] "https://control-plane.minikube.internal:50275" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50275 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0925 12:23:35.685502    4893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0925 12:23:35.688598    4893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50275 /etc/kubernetes/controller-manager.conf
	I0925 12:23:35.691259    4893 kubeadm.go:163] "https://control-plane.minikube.internal:50275" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50275 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0925 12:23:35.691286    4893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0925 12:23:35.693811    4893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50275 /etc/kubernetes/scheduler.conf
	I0925 12:23:35.696760    4893 kubeadm.go:163] "https://control-plane.minikube.internal:50275" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50275 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0925 12:23:35.696785    4893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0925 12:23:35.699710    4893 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
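
That kubeadm init invocation is the full-reset fallback after restartPrimaryControlPlane gave up: the same rendered config, with preflight checks ignored for conditions that are expected on a re-used VM (populated manifest and etcd directories, a busy port 10250, swap, and low CPU/RAM). A hedged Go sketch of assembling such a command; the helper name and argument plumbing are illustrative, not minikube's actual API:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// buildKubeadmInit mirrors the shape of the command in the log: run kubeadm
// from the version-pinned binaries directory, via bash so the PATH override
// and sudo compose, ignoring the listed preflight errors.
func buildKubeadmInit(version, config string, ignored []string) *exec.Cmd {
	script := fmt.Sprintf(
		`sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm init --config %s --ignore-preflight-errors=%s`,
		version, config, strings.Join(ignored, ","),
	)
	return exec.Command("/bin/bash", "-c", script)
}

func main() {
	cmd := buildKubeadmInit("v1.24.1", "/var/tmp/minikube/kubeadm.yaml",
		[]string{"DirAvailable--etc-kubernetes-manifests", "Port-10250", "Swap", "NumCPU", "Mem"})
	fmt.Println(cmd.String())
}

From here kubeadm's own phase output takes over: preflight, certs, kubeconfig, kubelet-start, control-plane, and etcd.
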
	I0925 12:23:35.716678    4893 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0925 12:23:35.716724    4893 kubeadm.go:310] [preflight] Running pre-flight checks
	I0925 12:23:35.762380    4893 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0925 12:23:35.762437    4893 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0925 12:23:35.762481    4893 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0925 12:23:35.813065    4893 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0925 12:23:35.817277    4893 out.go:235]   - Generating certificates and keys ...
	I0925 12:23:35.817309    4893 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0925 12:23:35.817341    4893 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0925 12:23:35.817388    4893 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0925 12:23:35.817423    4893 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0925 12:23:35.817477    4893 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0925 12:23:35.817507    4893 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0925 12:23:35.817553    4893 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0925 12:23:35.817588    4893 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0925 12:23:35.817630    4893 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0925 12:23:35.817671    4893 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0925 12:23:35.817694    4893 kubeadm.go:310] [certs] Using the existing "sa" key
	I0925 12:23:35.817724    4893 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0925 12:23:35.918260    4893 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0925 12:23:35.989165    4893 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0925 12:23:36.083759    4893 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0925 12:23:36.137035    4893 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0925 12:23:36.167863    4893 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0925 12:23:36.168269    4893 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0925 12:23:36.168323    4893 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0925 12:23:36.242981    4893 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0925 12:23:36.247479    4893 out.go:235]   - Booting up control plane ...
	I0925 12:23:36.247532    4893 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0925 12:23:36.247568    4893 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0925 12:23:36.247606    4893 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0925 12:23:36.247648    4893 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0925 12:23:36.247727    4893 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0925 12:23:35.565987    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:23:35.566008    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:23:40.247786    4893 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.001918 seconds
	I0925 12:23:40.247921    4893 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0925 12:23:40.251705    4893 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0925 12:23:40.767495    4893 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0925 12:23:40.767931    4893 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-796000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0925 12:23:41.272593    4893 kubeadm.go:310] [bootstrap-token] Using token: lqziip.t5ibeo2co01a4zx2
	I0925 12:23:41.277900    4893 out.go:235]   - Configuring RBAC rules ...
	I0925 12:23:41.277964    4893 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0925 12:23:41.278015    4893 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0925 12:23:41.284695    4893 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0925 12:23:41.285678    4893 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0925 12:23:41.286626    4893 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0925 12:23:41.287552    4893 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0925 12:23:41.291279    4893 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0925 12:23:41.468167    4893 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0925 12:23:41.677328    4893 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0925 12:23:41.677805    4893 kubeadm.go:310] 
	I0925 12:23:41.677838    4893 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0925 12:23:41.677841    4893 kubeadm.go:310] 
	I0925 12:23:41.677891    4893 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0925 12:23:41.677896    4893 kubeadm.go:310] 
	I0925 12:23:41.677909    4893 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0925 12:23:41.677938    4893 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0925 12:23:41.677962    4893 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0925 12:23:41.677964    4893 kubeadm.go:310] 
	I0925 12:23:41.677993    4893 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0925 12:23:41.677995    4893 kubeadm.go:310] 
	I0925 12:23:41.678017    4893 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0925 12:23:41.678019    4893 kubeadm.go:310] 
	I0925 12:23:41.678048    4893 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0925 12:23:41.678088    4893 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0925 12:23:41.678128    4893 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0925 12:23:41.678130    4893 kubeadm.go:310] 
	I0925 12:23:41.678175    4893 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0925 12:23:41.678214    4893 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0925 12:23:41.678217    4893 kubeadm.go:310] 
	I0925 12:23:41.678270    4893 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token lqziip.t5ibeo2co01a4zx2 \
	I0925 12:23:41.678325    4893 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e51346daa4df67057de8045209492e1d5416aabfe1ee2597d0ef678584899cc1 \
	I0925 12:23:41.678338    4893 kubeadm.go:310] 	--control-plane 
	I0925 12:23:41.678341    4893 kubeadm.go:310] 
	I0925 12:23:41.678400    4893 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0925 12:23:41.678405    4893 kubeadm.go:310] 
	I0925 12:23:41.678448    4893 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token lqziip.t5ibeo2co01a4zx2 \
	I0925 12:23:41.678516    4893 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e51346daa4df67057de8045209492e1d5416aabfe1ee2597d0ef678584899cc1 
	I0925 12:23:41.678575    4893 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0925 12:23:41.678632    4893 cni.go:84] Creating CNI manager for ""
	I0925 12:23:41.678641    4893 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 12:23:41.681719    4893 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0925 12:23:41.689831    4893 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0925 12:23:41.692956    4893 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0925 12:23:41.698968    4893 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0925 12:23:41.699051    4893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 12:23:41.699051    4893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-796000 minikube.k8s.io/updated_at=2024_09_25T12_23_41_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=cb9e6220ecbd737c1d09ad9630c6f144f437664a minikube.k8s.io/name=running-upgrade-796000 minikube.k8s.io/primary=true
	I0925 12:23:41.739446    4893 kubeadm.go:1113] duration metric: took 40.444083ms to wait for elevateKubeSystemPrivileges
	I0925 12:23:41.739460    4893 ops.go:34] apiserver oom_adj: -16
	I0925 12:23:41.739544    4893 kubeadm.go:394] duration metric: took 4m15.023288792s to StartCluster
	I0925 12:23:41.739556    4893 settings.go:142] acquiring lock: {Name:mk3a21ccfd977fa63a309ae265edad20537229ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:23:41.739651    4893 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:23:41.740033    4893 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/kubeconfig: {Name:mkc011f0309eba8a9546287478e16310d103c97e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:23:41.740256    4893 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:23:41.740261    4893 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0925 12:23:41.740297    4893 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-796000"
	I0925 12:23:41.740306    4893 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-796000"
	W0925 12:23:41.740334    4893 addons.go:243] addon storage-provisioner should already be in state true
	I0925 12:23:41.740336    4893 config.go:182] Loaded profile config "running-upgrade-796000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0925 12:23:41.740347    4893 host.go:66] Checking if "running-upgrade-796000" exists ...
	I0925 12:23:41.740322    4893 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-796000"
	I0925 12:23:41.740358    4893 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-796000"
	I0925 12:23:41.741209    4893 kapi.go:59] client config for running-upgrade-796000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/running-upgrade-796000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/running-upgrade-796000/client.key", CAFile:"/Users/jenkins/minikube-integration/19681-1412/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10619a030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0925 12:23:41.741330    4893 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-796000"
	W0925 12:23:41.741335    4893 addons.go:243] addon default-storageclass should already be in state true
	I0925 12:23:41.741342    4893 host.go:66] Checking if "running-upgrade-796000" exists ...
	I0925 12:23:41.744576    4893 out.go:177] * Verifying Kubernetes components...
	I0925 12:23:41.744874    4893 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0925 12:23:41.749247    4893 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0925 12:23:41.749253    4893 sshutil.go:53] new ssh client: &{IP:localhost Port:50243 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/running-upgrade-796000/id_rsa Username:docker}
	I0925 12:23:41.752778    4893 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 12:23:41.756745    4893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 12:23:41.760772    4893 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 12:23:41.760782    4893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0925 12:23:41.760789    4893 sshutil.go:53] new ssh client: &{IP:localhost Port:50243 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/running-upgrade-796000/id_rsa Username:docker}
	I0925 12:23:41.847259    4893 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0925 12:23:41.852089    4893 api_server.go:52] waiting for apiserver process to appear ...
	I0925 12:23:41.852140    4893 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 12:23:41.855987    4893 api_server.go:72] duration metric: took 115.722709ms to wait for apiserver process to appear ...
	I0925 12:23:41.855995    4893 api_server.go:88] waiting for apiserver healthz status ...
	I0925 12:23:41.856002    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:23:41.876964    4893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0925 12:23:41.901300    4893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 12:23:42.213346    4893 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0925 12:23:42.213358    4893 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0925 12:23:40.568035    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:23:40.568079    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:23:46.858018    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:23:46.858070    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:23:45.570359    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:23:45.570401    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:23:51.858437    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:23:51.858476    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:23:50.572035    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:23:50.572085    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:23:56.858843    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:23:56.858886    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:23:55.574266    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:23:55.574394    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:23:55.585179    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:23:55.585264    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:23:55.597849    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:23:55.597935    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:23:55.609075    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:23:55.609155    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:23:55.620042    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:23:55.620141    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:23:55.632339    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:23:55.632432    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:23:55.645885    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:23:55.645970    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:23:55.655946    5014 logs.go:276] 0 containers: []
	W0925 12:23:55.655956    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:23:55.656022    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:23:55.666904    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:23:55.666922    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:23:55.666928    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:23:55.686057    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:23:55.686073    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:23:55.697499    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:23:55.697510    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:23:55.711873    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:23:55.711887    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:23:55.725253    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:23:55.725266    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:23:55.770051    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:23:55.770067    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:23:55.785161    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:23:55.785172    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:23:55.801267    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:23:55.801300    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:23:55.840927    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:23:55.840944    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:23:55.845543    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:23:55.845555    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:23:55.864268    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:23:55.864282    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:23:55.877071    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:23:55.877084    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:23:55.903009    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:23:55.903035    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:23:55.915645    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:23:55.915661    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:23:56.002069    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:23:56.002081    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:23:56.026919    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:23:56.026937    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:23:56.039332    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:23:56.039343    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:23:58.558836    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:24:01.859640    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:24:01.859687    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:24:03.559538    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:24:03.559828    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:24:03.584070    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:24:03.584193    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:24:03.602838    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:24:03.602934    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:24:03.616669    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:24:03.616761    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:24:03.627894    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:24:03.627990    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:24:03.637937    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:24:03.638019    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:24:03.648505    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:24:03.648573    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:24:03.659157    5014 logs.go:276] 0 containers: []
	W0925 12:24:03.659170    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:24:03.659246    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:24:03.670369    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:24:03.670385    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:24:03.670391    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:24:03.685107    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:24:03.685117    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:24:03.696314    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:24:03.696326    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:24:03.717732    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:24:03.717742    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:24:03.729286    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:24:03.729299    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:24:03.755113    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:24:03.755123    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:24:03.793695    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:24:03.793702    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:24:03.807498    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:24:03.807509    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:24:03.822560    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:24:03.822570    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:24:03.826732    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:24:03.826741    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:24:03.842459    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:24:03.842471    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:24:03.855561    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:24:03.855570    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:24:03.867713    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:24:03.867724    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:24:03.906571    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:24:03.906580    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:24:03.920275    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:24:03.920290    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:24:03.931786    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:24:03.931796    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:24:03.943309    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:24:03.943318    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
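	[Annotation] The cycle that ends here repeats throughout this transcript: for each control-plane component, list matching container IDs with "docker ps -a --filter=name=k8s_<component> --format={{.ID}}", then tail each container's logs with "docker logs --tail 400 <id>". A minimal local sketch of that pattern follows; running the commands directly via os/exec instead of minikube's ssh_runner, and the exact component list, are assumptions for illustration, not minikube's actual implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs runs the same docker filter seen in the log above and
// returns the IDs of containers whose name matches k8s_<component>.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_"+component, "--format={{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	// Component list copied from the log; kindnet yields 0 containers there.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"storage-provisioner",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("listing %s: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// Same tail length as the log's `docker logs --tail 400 <id>`.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s\n", c, id, logs)
		}
	}
}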
	I0925 12:24:06.860375    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:24:06.860406    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:24:06.483085    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:24:11.861444    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:24:11.861463    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0925 12:24:12.215094    4893 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0925 12:24:12.218423    4893 out.go:177] * Enabled addons: storage-provisioner
	I0925 12:24:12.226318    4893 addons.go:510] duration metric: took 30.486622416s for enable addons: enabled=[storage-provisioner]
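	[Annotation] Both processes (4893 and 5014) are stuck in the same loop: api_server.go polls https://10.0.2.15:8443/healthz, each Get times out after roughly five seconds ("Client.Timeout exceeded while awaiting headers"), and the check is retried until the overall wait expires — which is also why enabling 'default-storageclass' failed just above with an i/o timeout against the same endpoint. A minimal sketch of such a poller; the timeout, retry count, and TLS handling are assumptions, not minikube's actual code.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz performs one GET against the apiserver's /healthz endpoint,
// mirroring the "Checking apiserver healthz at ..." lines in the log.
func checkHealthz(client *http.Client, url string) error {
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	client := &http.Client{
		// Matches the ~5s gaps between "stopped:" lines in the log (assumption).
		Timeout: 5 * time.Second,
		// The guest apiserver serves a self-signed certificate, so verification
		// is skipped here (assumption for illustration only).
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://10.0.2.15:8443/healthz"
	for attempt := 0; attempt < 5; attempt++ {
		if err := checkHealthz(client, url); err != nil {
			fmt.Printf("stopped: %s: %v\n", url, err)
			continue
		}
		fmt.Println("apiserver healthy")
		return
	}
}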
	I0925 12:24:11.484210    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:24:11.484679    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:24:11.514836    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:24:11.514990    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:24:11.537494    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:24:11.537588    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:24:11.551137    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:24:11.551233    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:24:11.563168    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:24:11.563257    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:24:11.573860    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:24:11.573949    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:24:11.584760    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:24:11.584844    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:24:11.597282    5014 logs.go:276] 0 containers: []
	W0925 12:24:11.597295    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:24:11.597374    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:24:11.608187    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:24:11.608208    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:24:11.608214    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:24:11.612925    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:24:11.612932    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:24:11.624765    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:24:11.624776    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:24:11.649987    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:24:11.649995    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:24:11.667269    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:24:11.667280    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:24:11.679302    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:24:11.679317    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:24:11.691319    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:24:11.691331    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:24:11.705469    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:24:11.705483    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:24:11.743308    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:24:11.743321    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:24:11.757808    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:24:11.757822    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:24:11.772223    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:24:11.772237    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:24:11.784068    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:24:11.784079    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:24:11.822770    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:24:11.822778    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:24:11.856953    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:24:11.856964    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:24:11.871099    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:24:11.871111    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:24:11.888607    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:24:11.888617    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:24:11.900334    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:24:11.900346    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:24:14.415047    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:24:16.862685    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:24:16.862745    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:24:19.415930    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:24:19.416047    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:24:19.426801    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:24:19.426885    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:24:19.437990    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:24:19.438086    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:24:19.448310    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:24:19.448391    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:24:19.458806    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:24:19.458895    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:24:19.469562    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:24:19.469644    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:24:19.481464    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:24:19.481546    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:24:19.491642    5014 logs.go:276] 0 containers: []
	W0925 12:24:19.491659    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:24:19.491732    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:24:19.502567    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:24:19.502590    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:24:19.502595    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:24:19.506982    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:24:19.506991    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:24:19.546677    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:24:19.546687    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:24:19.559278    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:24:19.559294    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:24:19.598214    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:24:19.598230    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:24:19.609727    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:24:19.609739    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:24:19.621549    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:24:19.621563    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:24:19.636308    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:24:19.636319    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:24:19.648184    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:24:19.648196    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:24:19.659415    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:24:19.659425    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:24:19.673649    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:24:19.673660    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:24:19.687277    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:24:19.687287    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:24:19.700824    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:24:19.700834    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:24:19.718887    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:24:19.718898    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:24:19.757915    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:24:19.757926    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:24:19.777682    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:24:19.777696    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:24:21.864435    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:24:21.864533    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:24:19.790388    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:24:19.790400    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:24:22.316554    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:24:26.865978    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:24:26.866023    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:24:27.318842    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:24:27.319247    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:24:27.358793    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:24:27.358960    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:24:27.381025    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:24:27.381179    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:24:27.397053    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:24:27.397152    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:24:27.411078    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:24:27.411163    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:24:27.426405    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:24:27.426490    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:24:27.437607    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:24:27.437684    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:24:27.447603    5014 logs.go:276] 0 containers: []
	W0925 12:24:27.447621    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:24:27.447708    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:24:27.458646    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:24:27.458665    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:24:27.458671    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:24:27.463537    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:24:27.463546    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:24:27.499128    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:24:27.499138    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:24:27.511355    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:24:27.511367    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:24:27.526711    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:24:27.526726    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:24:27.540485    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:24:27.540498    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:24:27.554032    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:24:27.554042    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:24:27.579074    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:24:27.579082    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:24:27.617935    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:24:27.617946    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:24:27.632346    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:24:27.632356    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:24:27.649997    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:24:27.650013    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:24:27.687218    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:24:27.687229    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:24:27.701149    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:24:27.701162    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:24:27.712982    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:24:27.712995    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:24:27.724366    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:24:27.724376    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:24:27.738309    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:24:27.738322    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:24:27.749642    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:24:27.749655    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:24:31.868196    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:24:31.868229    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:24:30.263418    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:24:36.868437    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:24:36.868480    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:24:35.265636    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:24:35.265890    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:24:35.286400    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:24:35.286533    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:24:35.301021    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:24:35.301114    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:24:35.313305    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:24:35.313390    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:24:35.328593    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:24:35.328674    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:24:35.339562    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:24:35.339644    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:24:35.350745    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:24:35.350814    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:24:35.361314    5014 logs.go:276] 0 containers: []
	W0925 12:24:35.361326    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:24:35.361395    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:24:35.371678    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:24:35.371696    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:24:35.371701    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:24:35.396840    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:24:35.396852    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:24:35.440011    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:24:35.440022    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:24:35.452257    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:24:35.452267    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:24:35.466267    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:24:35.466278    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:24:35.477605    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:24:35.477616    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:24:35.491403    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:24:35.491413    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:24:35.505195    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:24:35.505205    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:24:35.516912    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:24:35.516921    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:24:35.528205    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:24:35.528216    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:24:35.539329    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:24:35.539344    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:24:35.551486    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:24:35.551502    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:24:35.590533    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:24:35.590543    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:24:35.608497    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:24:35.608508    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:24:35.620325    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:24:35.620335    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:24:35.639168    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:24:35.639178    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:24:35.643933    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:24:35.643941    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:24:38.180537    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:24:41.870596    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:24:41.870723    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:24:41.881468    4893 logs.go:276] 1 containers: [5cb9b6d95558]
	I0925 12:24:41.881566    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:24:41.891908    4893 logs.go:276] 1 containers: [9ea79e8d93b8]
	I0925 12:24:41.891996    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:24:41.902336    4893 logs.go:276] 2 containers: [2cf271d59fa5 578e7ca35890]
	I0925 12:24:41.902409    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:24:41.912871    4893 logs.go:276] 1 containers: [a7a133842232]
	I0925 12:24:41.912942    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:24:41.923193    4893 logs.go:276] 1 containers: [57558152c4b3]
	I0925 12:24:41.923283    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:24:41.933880    4893 logs.go:276] 1 containers: [a9a507e07152]
	I0925 12:24:41.933958    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:24:41.943769    4893 logs.go:276] 0 containers: []
	W0925 12:24:41.943779    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:24:41.943838    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:24:41.953980    4893 logs.go:276] 1 containers: [d3b3239f3636]
	I0925 12:24:41.953997    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:24:41.954003    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:24:41.958666    4893 logs.go:123] Gathering logs for etcd [9ea79e8d93b8] ...
	I0925 12:24:41.958673    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ea79e8d93b8"
	I0925 12:24:41.972927    4893 logs.go:123] Gathering logs for coredns [578e7ca35890] ...
	I0925 12:24:41.972937    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578e7ca35890"
	I0925 12:24:41.987712    4893 logs.go:123] Gathering logs for kube-scheduler [a7a133842232] ...
	I0925 12:24:41.987723    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a133842232"
	I0925 12:24:42.002200    4893 logs.go:123] Gathering logs for storage-provisioner [d3b3239f3636] ...
	I0925 12:24:42.002211    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b3239f3636"
	I0925 12:24:42.014166    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:24:42.014181    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:24:42.037606    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:24:42.037613    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:24:42.055670    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:24:42.055761    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:24:42.071800    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:24:42.071804    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:24:42.138045    4893 logs.go:123] Gathering logs for kube-apiserver [5cb9b6d95558] ...
	I0925 12:24:42.138056    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cb9b6d95558"
	I0925 12:24:42.151833    4893 logs.go:123] Gathering logs for coredns [2cf271d59fa5] ...
	I0925 12:24:42.151843    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf271d59fa5"
	I0925 12:24:42.163601    4893 logs.go:123] Gathering logs for kube-proxy [57558152c4b3] ...
	I0925 12:24:42.163612    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57558152c4b3"
	I0925 12:24:42.175798    4893 logs.go:123] Gathering logs for kube-controller-manager [a9a507e07152] ...
	I0925 12:24:42.175808    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a507e07152"
	I0925 12:24:42.197274    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:24:42.197282    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:24:42.208586    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:24:42.208598    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:24:42.208625    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:24:42.208632    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:24:42.208636    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:24:42.208640    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:24:42.208643    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
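	[Annotation] The "Found kubelet problem" warnings above show logs.go flagging reflector RBAC errors (the node user forbidden from listing the coredns ConfigMap) out of the kubelet journal, then echoing them under "X Problems detected in kubelet:". A rough sketch of that scan; the substring heuristics below are assumptions for illustration, not minikube's real matching rules.

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same journal query as the log's "sudo journalctl -u kubelet -n 400".
	out, err := exec.Command("journalctl", "-u", "kubelet", "-n", "400").Output()
	if err != nil {
		fmt.Println("reading kubelet journal:", err)
		return
	}
	var problems []string
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		line := sc.Text()
		// Flag lines like the reflector errors in the log, where the node's
		// user is forbidden from listing a resource (assumed heuristic).
		if strings.Contains(line, "reflector.go") && strings.Contains(line, "forbidden") {
			problems = append(problems, line)
		}
	}
	if len(problems) > 0 {
		fmt.Println("X Problems detected in kubelet:")
		for _, p := range problems {
			fmt.Println(" ", p)
		}
	}
}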
	I0925 12:24:43.182780    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:24:43.183000    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:24:43.202579    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:24:43.202689    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:24:43.216575    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:24:43.216670    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:24:43.234895    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:24:43.234981    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:24:43.245160    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:24:43.245251    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:24:43.255573    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:24:43.255651    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:24:43.266749    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:24:43.266829    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:24:43.277317    5014 logs.go:276] 0 containers: []
	W0925 12:24:43.277330    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:24:43.277399    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:24:43.287501    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:24:43.287519    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:24:43.287524    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:24:43.305744    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:24:43.305758    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:24:43.318333    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:24:43.318347    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:24:43.356328    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:24:43.356335    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:24:43.393270    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:24:43.393288    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:24:43.407399    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:24:43.407413    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:24:43.455143    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:24:43.455154    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:24:43.467208    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:24:43.467218    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:24:43.481773    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:24:43.481785    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:24:43.505025    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:24:43.505033    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:24:43.508883    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:24:43.508892    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:24:43.524204    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:24:43.524215    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:24:43.537756    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:24:43.537767    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:24:43.552972    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:24:43.552984    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:24:43.564950    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:24:43.564963    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:24:43.576920    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:24:43.576930    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:24:43.597685    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:24:43.597697    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:24:46.112369    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:24:52.211833    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:24:51.114750    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:24:51.115168    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:24:51.148160    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:24:51.148315    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:24:51.171086    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:24:51.171183    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:24:51.184341    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:24:51.184433    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:24:51.202525    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:24:51.202608    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:24:51.213630    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:24:51.213718    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:24:51.224181    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:24:51.224260    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:24:51.234149    5014 logs.go:276] 0 containers: []
	W0925 12:24:51.234159    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:24:51.234225    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:24:51.244851    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:24:51.244868    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:24:51.244873    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:24:51.282697    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:24:51.282708    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:24:51.299109    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:24:51.299120    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:24:51.311039    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:24:51.311055    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:24:51.315597    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:24:51.315606    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:24:51.351843    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:24:51.351859    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:24:51.367084    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:24:51.367094    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:24:51.380225    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:24:51.380239    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:24:51.421097    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:24:51.421109    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:24:51.432000    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:24:51.432015    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:24:51.446990    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:24:51.447004    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:24:51.458344    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:24:51.458356    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:24:51.472126    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:24:51.472136    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:24:51.484125    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:24:51.484137    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:24:51.501610    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:24:51.501626    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:24:51.516512    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:24:51.516525    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:24:51.528278    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:24:51.528293    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:24:54.053680    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:24:57.214425    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:24:57.214745    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:24:57.240621    4893 logs.go:276] 1 containers: [5cb9b6d95558]
	I0925 12:24:57.240773    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:24:57.258080    4893 logs.go:276] 1 containers: [9ea79e8d93b8]
	I0925 12:24:57.258192    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:24:57.271619    4893 logs.go:276] 2 containers: [2cf271d59fa5 578e7ca35890]
	I0925 12:24:57.271715    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:24:57.283033    4893 logs.go:276] 1 containers: [a7a133842232]
	I0925 12:24:57.283109    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:24:57.292943    4893 logs.go:276] 1 containers: [57558152c4b3]
	I0925 12:24:57.293038    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:24:57.310600    4893 logs.go:276] 1 containers: [a9a507e07152]
	I0925 12:24:57.310686    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:24:57.320329    4893 logs.go:276] 0 containers: []
	W0925 12:24:57.320341    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:24:57.320415    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:24:57.330443    4893 logs.go:276] 1 containers: [d3b3239f3636]
	I0925 12:24:57.330458    4893 logs.go:123] Gathering logs for kube-controller-manager [a9a507e07152] ...
	I0925 12:24:57.330464    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a507e07152"
	I0925 12:24:57.347625    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:24:57.347634    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:24:57.387294    4893 logs.go:123] Gathering logs for kube-apiserver [5cb9b6d95558] ...
	I0925 12:24:57.387309    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cb9b6d95558"
	I0925 12:24:57.405383    4893 logs.go:123] Gathering logs for etcd [9ea79e8d93b8] ...
	I0925 12:24:57.405393    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ea79e8d93b8"
	I0925 12:24:57.424924    4893 logs.go:123] Gathering logs for coredns [2cf271d59fa5] ...
	I0925 12:24:57.424935    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf271d59fa5"
	I0925 12:24:57.439348    4893 logs.go:123] Gathering logs for coredns [578e7ca35890] ...
	I0925 12:24:57.439361    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578e7ca35890"
	I0925 12:24:57.451084    4893 logs.go:123] Gathering logs for kube-scheduler [a7a133842232] ...
	I0925 12:24:57.451094    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a133842232"
	I0925 12:24:57.470875    4893 logs.go:123] Gathering logs for kube-proxy [57558152c4b3] ...
	I0925 12:24:57.470885    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57558152c4b3"
	I0925 12:24:57.485039    4893 logs.go:123] Gathering logs for storage-provisioner [d3b3239f3636] ...
	I0925 12:24:57.485050    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b3239f3636"
	I0925 12:24:57.496291    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:24:57.496308    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:24:57.507852    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:24:57.507862    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:24:57.526730    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:24:57.526824    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:24:57.542739    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:24:57.542750    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:24:57.547709    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:24:57.547719    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:24:57.572129    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:24:57.572141    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:24:57.572167    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:24:57.572192    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:24:57.572197    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:24:57.572201    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:24:57.572205    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
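	[Annotation] The out.go lines above record that both TERM and COLORTERM are empty and that the terminal "probably does not support color", so output is redirected plainly to fd 2. A minimal version of that kind of heuristic; the exact rules are an assumption, not minikube's implementation.

package main

import (
	"fmt"
	"os"
	"strings"
)

// probablySupportsColor guesses color support from TERM/COLORTERM,
// returning false for the empty environment seen in the log.
func probablySupportsColor() bool {
	if os.Getenv("COLORTERM") != "" {
		return true
	}
	term := os.Getenv("TERM")
	if term == "" || term == "dumb" {
		return false
	}
	return strings.Contains(term, "color") ||
		strings.HasPrefix(term, "xterm") ||
		strings.HasPrefix(term, "screen")
}

func main() {
	fmt.Printf("TERM=%s,COLORTERM=%s, color support: %v\n",
		os.Getenv("TERM"), os.Getenv("COLORTERM"), probablySupportsColor())
}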
	I0925 12:24:59.055941    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:24:59.056335    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:24:59.085892    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:24:59.086049    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:24:59.105271    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:24:59.105392    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:24:59.119535    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:24:59.119614    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:24:59.131531    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:24:59.131611    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:24:59.142194    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:24:59.142299    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:24:59.153121    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:24:59.153196    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:24:59.163901    5014 logs.go:276] 0 containers: []
	W0925 12:24:59.163915    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:24:59.163987    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:24:59.174484    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:24:59.174503    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:24:59.174508    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:24:59.188939    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:24:59.188954    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:24:59.203688    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:24:59.203700    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:24:59.218797    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:24:59.218808    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:24:59.230629    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:24:59.230641    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:24:59.243189    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:24:59.243202    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:24:59.281999    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:24:59.282011    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:24:59.295652    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:24:59.295665    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:24:59.307167    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:24:59.307182    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:24:59.311295    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:24:59.311303    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:24:59.345850    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:24:59.345860    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:24:59.383583    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:24:59.383597    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:24:59.398439    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:24:59.398455    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:24:59.410275    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:24:59.410288    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:24:59.428120    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:24:59.428130    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:24:59.440220    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:24:59.440230    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:24:59.452175    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:24:59.452189    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:25:01.975811    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:25:07.576105    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:25:06.978329    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:25:06.978530    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:25:06.998508    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:25:06.998611    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:25:07.012513    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:25:07.012606    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:25:07.023862    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:25:07.023947    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:25:07.034359    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:25:07.034434    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:25:07.045244    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:25:07.045327    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:25:07.055344    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:25:07.055431    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:25:07.069599    5014 logs.go:276] 0 containers: []
	W0925 12:25:07.069612    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:25:07.069686    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:25:07.080905    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
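
Before each gathering pass, every component is re-discovered by Docker name filter (logs.go:276); zero matches produce the "No container was found matching" warning seen for "kindnet", which simply means the kindnet CNI is not deployed in this profile. A sketch of that discovery step:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
        }
        for _, c := range components {
            // Mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Println("docker ps failed:", err)
                continue
            }
            ids := strings.Fields(string(out))
            if len(ids) == 0 {
                // Matches the warning: No container was found matching "kindnet"
                fmt.Printf("No container was found matching %q\n", c)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids)
        }
    }
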
	I0925 12:25:07.080922    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:25:07.080927    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:25:07.092111    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:25:07.092121    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:25:07.103843    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:25:07.103853    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:25:07.108100    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:25:07.108106    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:25:07.122657    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:25:07.122670    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:25:07.134710    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:25:07.134724    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:25:07.147116    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:25:07.147126    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:25:07.158765    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:25:07.158786    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:25:07.176336    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:25:07.176352    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:25:07.188559    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:25:07.188572    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:25:07.224465    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:25:07.224473    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:25:07.238404    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:25:07.238417    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:25:07.281162    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:25:07.281173    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:25:07.294869    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:25:07.294880    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:25:07.306388    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:25:07.306398    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:25:07.331514    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:25:07.331521    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:25:07.366374    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:25:07.366385    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:25:12.578261    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:25:12.578549    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:25:12.611117    4893 logs.go:276] 1 containers: [5cb9b6d95558]
	I0925 12:25:12.611244    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:25:12.627061    4893 logs.go:276] 1 containers: [9ea79e8d93b8]
	I0925 12:25:12.627152    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:25:12.640405    4893 logs.go:276] 2 containers: [2cf271d59fa5 578e7ca35890]
	I0925 12:25:12.640493    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:25:12.651561    4893 logs.go:276] 1 containers: [a7a133842232]
	I0925 12:25:12.651646    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:25:12.662774    4893 logs.go:276] 1 containers: [57558152c4b3]
	I0925 12:25:12.662873    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:25:12.673293    4893 logs.go:276] 1 containers: [a9a507e07152]
	I0925 12:25:12.673380    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:25:12.683425    4893 logs.go:276] 0 containers: []
	W0925 12:25:12.683438    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:25:12.683513    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:25:12.693776    4893 logs.go:276] 1 containers: [d3b3239f3636]
	I0925 12:25:12.693792    4893 logs.go:123] Gathering logs for coredns [578e7ca35890] ...
	I0925 12:25:12.693797    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578e7ca35890"
	I0925 12:25:12.707518    4893 logs.go:123] Gathering logs for kube-scheduler [a7a133842232] ...
	I0925 12:25:12.707531    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a133842232"
	I0925 12:25:12.722357    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:25:12.722366    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:25:12.734602    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:25:12.734618    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:25:12.771012    4893 logs.go:123] Gathering logs for kube-apiserver [5cb9b6d95558] ...
	I0925 12:25:12.771023    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cb9b6d95558"
	I0925 12:25:12.785024    4893 logs.go:123] Gathering logs for etcd [9ea79e8d93b8] ...
	I0925 12:25:12.785035    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ea79e8d93b8"
	I0925 12:25:12.798985    4893 logs.go:123] Gathering logs for kube-proxy [57558152c4b3] ...
	I0925 12:25:12.798996    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57558152c4b3"
	I0925 12:25:12.812044    4893 logs.go:123] Gathering logs for kube-controller-manager [a9a507e07152] ...
	I0925 12:25:12.812057    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a507e07152"
	I0925 12:25:12.829996    4893 logs.go:123] Gathering logs for storage-provisioner [d3b3239f3636] ...
	I0925 12:25:12.830008    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b3239f3636"
	I0925 12:25:09.886601    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:25:12.841711    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:25:12.841722    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:25:12.865029    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:25:12.865036    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:25:12.883248    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:25:12.883340    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:25:12.899264    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:25:12.899272    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:25:12.903903    4893 logs.go:123] Gathering logs for coredns [2cf271d59fa5] ...
	I0925 12:25:12.903913    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf271d59fa5"
	I0925 12:25:12.918995    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:25:12.919004    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:25:12.919030    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:25:12.919036    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:25:12.919040    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:25:12.919043    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:25:12.919046    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
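
The "Found kubelet problem" warnings (logs.go:138) and the "X Problems detected in kubelet" summary above come from scanning the kubelet journal for known error patterns. The two hits are node-authorizer denials: the kubelet of running-upgrade-796000 is not yet associated with any pod that references the coredns ConfigMap, so its list/watch is forbidden; this is typically transient after an upgrade while that node-object relationship is re-established. A minimal sketch of such a scan (the match patterns here are assumptions, not minikube's actual rule set):

    package main

    import (
        "bufio"
        "bytes"
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("/bin/bash", "-c",
            "sudo journalctl -u kubelet -n 400").Output()
        if err != nil {
            fmt.Println("journalctl failed:", err)
            return
        }
        sc := bufio.NewScanner(bytes.NewReader(out))
        for sc.Scan() {
            line := sc.Text()
            // Assumed patterns: flag authorization failures like the
            // node-authorizer denial quoted in the log above.
            if strings.Contains(line, "is forbidden") ||
                strings.Contains(line, "Failed to watch") {
                fmt.Println("Found kubelet problem:", line)
            }
        }
    }
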
	I0925 12:25:14.888816    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:25:14.889021    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:25:14.906654    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:25:14.906756    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:25:14.918570    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:25:14.918640    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:25:14.929169    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:25:14.929249    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:25:14.941217    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:25:14.941297    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:25:14.951661    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:25:14.951743    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:25:14.962874    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:25:14.962951    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:25:14.973420    5014 logs.go:276] 0 containers: []
	W0925 12:25:14.973435    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:25:14.973506    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:25:14.983921    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:25:14.983940    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:25:14.983945    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:25:15.019169    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:25:15.019185    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:25:15.036727    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:25:15.036737    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:25:15.049681    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:25:15.049691    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:25:15.086912    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:25:15.086925    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:25:15.098799    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:25:15.098810    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:25:15.110201    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:25:15.110210    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:25:15.133869    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:25:15.133876    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:25:15.173293    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:25:15.173304    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:25:15.177555    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:25:15.177561    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:25:15.190798    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:25:15.190808    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:25:15.205090    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:25:15.205100    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:25:15.216839    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:25:15.216851    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:25:15.228007    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:25:15.228017    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:25:15.242021    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:25:15.242032    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:25:15.272126    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:25:15.272141    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:25:15.287922    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:25:15.287933    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:25:17.801929    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:25:22.804208    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:25:22.804530    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:25:22.827632    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:25:22.827760    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:25:22.843876    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:25:22.843973    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:25:22.857062    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:25:22.857146    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:25:22.868493    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:25:22.868584    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:25:22.880620    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:25:22.880702    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:25:22.891020    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:25:22.891104    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:25:22.900793    5014 logs.go:276] 0 containers: []
	W0925 12:25:22.900805    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:25:22.900872    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:25:22.910872    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:25:22.910890    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:25:22.910895    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:25:22.925083    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:25:22.925094    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:25:22.940262    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:25:22.940275    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:25:22.959695    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:25:22.959712    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:25:22.972264    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:25:22.972276    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:25:22.984746    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:25:22.984757    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:25:22.988683    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:25:22.988689    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:25:23.027872    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:25:23.027889    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:25:23.040094    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:25:23.040111    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:25:23.052421    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:25:23.052432    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:25:23.075427    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:25:23.075436    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:25:23.111659    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:25:23.111666    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:25:23.126249    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:25:23.126264    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:25:23.142491    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:25:23.142502    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:25:23.154076    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:25:23.154089    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:25:23.190152    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:25:23.190164    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:25:23.204486    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:25:23.204496    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:25:22.921592    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:25:25.718253    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:25:27.923816    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:25:27.924333    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:25:27.957289    4893 logs.go:276] 1 containers: [5cb9b6d95558]
	I0925 12:25:27.957457    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:25:27.977297    4893 logs.go:276] 1 containers: [9ea79e8d93b8]
	I0925 12:25:27.977415    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:25:27.991671    4893 logs.go:276] 2 containers: [2cf271d59fa5 578e7ca35890]
	I0925 12:25:27.991753    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:25:28.003969    4893 logs.go:276] 1 containers: [a7a133842232]
	I0925 12:25:28.004043    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:25:28.015210    4893 logs.go:276] 1 containers: [57558152c4b3]
	I0925 12:25:28.015304    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:25:28.026211    4893 logs.go:276] 1 containers: [a9a507e07152]
	I0925 12:25:28.026297    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:25:28.037031    4893 logs.go:276] 0 containers: []
	W0925 12:25:28.037044    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:25:28.037112    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:25:28.048156    4893 logs.go:276] 1 containers: [d3b3239f3636]
	I0925 12:25:28.048172    4893 logs.go:123] Gathering logs for storage-provisioner [d3b3239f3636] ...
	I0925 12:25:28.048178    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b3239f3636"
	I0925 12:25:28.060485    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:25:28.060498    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:25:28.080780    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:25:28.080871    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:25:28.096456    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:25:28.096464    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:25:28.101181    4893 logs.go:123] Gathering logs for kube-apiserver [5cb9b6d95558] ...
	I0925 12:25:28.101188    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cb9b6d95558"
	I0925 12:25:28.115851    4893 logs.go:123] Gathering logs for etcd [9ea79e8d93b8] ...
	I0925 12:25:28.115862    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ea79e8d93b8"
	I0925 12:25:28.129819    4893 logs.go:123] Gathering logs for coredns [2cf271d59fa5] ...
	I0925 12:25:28.129831    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf271d59fa5"
	I0925 12:25:28.148160    4893 logs.go:123] Gathering logs for kube-scheduler [a7a133842232] ...
	I0925 12:25:28.148173    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a133842232"
	I0925 12:25:28.163865    4893 logs.go:123] Gathering logs for kube-controller-manager [a9a507e07152] ...
	I0925 12:25:28.163881    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a507e07152"
	I0925 12:25:28.182008    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:25:28.182021    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:25:28.206445    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:25:28.206452    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:25:28.241312    4893 logs.go:123] Gathering logs for coredns [578e7ca35890] ...
	I0925 12:25:28.241323    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578e7ca35890"
	I0925 12:25:28.253518    4893 logs.go:123] Gathering logs for kube-proxy [57558152c4b3] ...
	I0925 12:25:28.253534    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57558152c4b3"
	I0925 12:25:28.266392    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:25:28.266408    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:25:28.278179    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:25:28.278190    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:25:28.278218    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:25:28.278223    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:25:28.278259    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:25:28.278275    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:25:28.278286    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:25:30.720482    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:25:30.720642    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:25:30.737129    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:25:30.737236    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:25:30.749699    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:25:30.749786    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:25:30.760632    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:25:30.760703    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:25:30.771798    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:25:30.771884    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:25:30.781947    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:25:30.782034    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:25:30.792189    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:25:30.792268    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:25:30.803191    5014 logs.go:276] 0 containers: []
	W0925 12:25:30.803200    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:25:30.803263    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:25:30.813580    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:25:30.813598    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:25:30.813603    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:25:30.824584    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:25:30.824596    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:25:30.836011    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:25:30.836021    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:25:30.849053    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:25:30.849063    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:25:30.887645    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:25:30.887661    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:25:30.904046    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:25:30.904059    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:25:30.921672    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:25:30.921684    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:25:30.934221    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:25:30.934231    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:25:30.946185    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:25:30.946196    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:25:30.981678    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:25:30.981694    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:25:30.998517    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:25:30.998528    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:25:31.016816    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:25:31.016832    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:25:31.020741    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:25:31.020746    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:25:31.033995    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:25:31.034004    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:25:31.045532    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:25:31.045541    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:25:31.068668    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:25:31.068675    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:25:31.106044    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:25:31.106056    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:25:33.622737    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:25:38.625032    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:25:38.625375    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:25:38.670082    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:25:38.670224    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:25:38.690328    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:25:38.690422    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:25:38.702515    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:25:38.702602    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:25:38.714848    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:25:38.714922    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:25:38.725357    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:25:38.725441    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:25:38.735862    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:25:38.735939    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:25:38.746061    5014 logs.go:276] 0 containers: []
	W0925 12:25:38.746074    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:25:38.746153    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:25:38.757268    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:25:38.757287    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:25:38.757292    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:25:38.769078    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:25:38.769090    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:25:38.785060    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:25:38.785070    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:25:38.821942    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:25:38.821950    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:25:38.836083    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:25:38.836092    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:25:38.847536    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:25:38.847546    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:25:38.859655    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:25:38.859665    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:25:38.877294    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:25:38.877306    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:25:38.891471    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:25:38.891481    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:25:38.906977    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:25:38.906993    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:25:38.919009    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:25:38.919019    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:25:38.923352    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:25:38.923361    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:25:38.937605    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:25:38.937614    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:25:38.953645    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:25:38.953659    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:25:38.965481    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:25:38.965492    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:25:38.989800    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:25:38.989810    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:25:39.026854    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:25:39.026866    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:25:38.282291    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:25:41.572509    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:25:43.284679    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:25:43.284809    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:25:43.296005    4893 logs.go:276] 1 containers: [5cb9b6d95558]
	I0925 12:25:43.296100    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:25:43.306424    4893 logs.go:276] 1 containers: [9ea79e8d93b8]
	I0925 12:25:43.306508    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:25:43.316919    4893 logs.go:276] 2 containers: [2cf271d59fa5 578e7ca35890]
	I0925 12:25:43.317006    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:25:43.327623    4893 logs.go:276] 1 containers: [a7a133842232]
	I0925 12:25:43.327698    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:25:43.338108    4893 logs.go:276] 1 containers: [57558152c4b3]
	I0925 12:25:43.338191    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:25:43.348145    4893 logs.go:276] 1 containers: [a9a507e07152]
	I0925 12:25:43.348218    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:25:43.359191    4893 logs.go:276] 0 containers: []
	W0925 12:25:43.359201    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:25:43.359279    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:25:43.369581    4893 logs.go:276] 1 containers: [d3b3239f3636]
	I0925 12:25:43.369597    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:25:43.369604    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:25:43.374217    4893 logs.go:123] Gathering logs for kube-apiserver [5cb9b6d95558] ...
	I0925 12:25:43.374225    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cb9b6d95558"
	I0925 12:25:43.388217    4893 logs.go:123] Gathering logs for kube-scheduler [a7a133842232] ...
	I0925 12:25:43.388226    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a133842232"
	I0925 12:25:43.404784    4893 logs.go:123] Gathering logs for kube-proxy [57558152c4b3] ...
	I0925 12:25:43.404794    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57558152c4b3"
	I0925 12:25:43.416495    4893 logs.go:123] Gathering logs for kube-controller-manager [a9a507e07152] ...
	I0925 12:25:43.416506    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a507e07152"
	I0925 12:25:43.434204    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:25:43.434214    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:25:43.458648    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:25:43.458659    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:25:43.469754    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:25:43.469767    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:25:43.489019    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:25:43.489110    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:25:43.504966    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:25:43.504973    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:25:43.540700    4893 logs.go:123] Gathering logs for etcd [9ea79e8d93b8] ...
	I0925 12:25:43.540714    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ea79e8d93b8"
	I0925 12:25:43.555385    4893 logs.go:123] Gathering logs for coredns [2cf271d59fa5] ...
	I0925 12:25:43.555399    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf271d59fa5"
	I0925 12:25:43.567422    4893 logs.go:123] Gathering logs for coredns [578e7ca35890] ...
	I0925 12:25:43.567437    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578e7ca35890"
	I0925 12:25:43.579278    4893 logs.go:123] Gathering logs for storage-provisioner [d3b3239f3636] ...
	I0925 12:25:43.579294    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b3239f3636"
	I0925 12:25:43.590761    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:25:43.590771    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:25:43.590797    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:25:43.590805    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:25:43.590809    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:25:43.590815    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:25:43.590818    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:25:46.574872    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:25:46.575490    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:25:46.613717    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:25:46.613874    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:25:46.634224    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:25:46.634346    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:25:46.648344    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:25:46.648424    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:25:46.660858    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:25:46.660951    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:25:46.671640    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:25:46.671732    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:25:46.685731    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:25:46.685815    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:25:46.696108    5014 logs.go:276] 0 containers: []
	W0925 12:25:46.696129    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:25:46.696199    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:25:46.706762    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:25:46.706780    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:25:46.706785    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:25:46.744102    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:25:46.744123    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:25:46.782709    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:25:46.782725    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:25:46.795018    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:25:46.795031    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:25:46.807410    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:25:46.807424    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:25:46.819193    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:25:46.819204    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:25:46.854227    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:25:46.854241    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:25:46.868379    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:25:46.868394    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:25:46.882837    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:25:46.882849    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:25:46.894466    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:25:46.894477    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:25:46.917207    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:25:46.917214    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:25:46.931277    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:25:46.931292    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:25:46.946527    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:25:46.946540    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:25:46.961487    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:25:46.961502    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:25:46.966705    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:25:46.966711    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:25:46.984468    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:25:46.984483    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:25:46.997575    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:25:46.997588    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:25:49.511286    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:25:54.513533    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:25:54.513706    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:25:54.528917    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:25:54.529020    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:25:54.542010    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:25:54.542098    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:25:54.552070    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:25:54.552142    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:25:54.562985    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:25:54.563078    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:25:54.573640    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:25:54.573722    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:25:54.584518    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:25:54.584597    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:25:54.595066    5014 logs.go:276] 0 containers: []
	W0925 12:25:54.595084    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:25:54.595162    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:25:54.605904    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:25:54.605926    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:25:54.605931    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:25:54.618779    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:25:54.618790    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:25:54.630683    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:25:54.630694    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:25:54.668085    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:25:54.668095    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:25:54.672144    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:25:54.672153    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:25:54.706527    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:25:54.706541    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:25:54.718558    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:25:54.718569    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:25:54.732012    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:25:54.732027    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:25:54.747276    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:25:54.747284    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:25:54.762225    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:25:54.762236    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:25:54.774811    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:25:54.774822    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:25:53.594782    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:25:54.818500    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:25:54.818514    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:25:54.830406    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:25:54.830419    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:25:54.854745    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:25:54.854756    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:25:54.868886    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:25:54.868901    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:25:54.880228    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:25:54.880240    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:25:54.897957    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:25:54.897968    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
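The cycle that just finished repeats throughout this section: minikube first resolves container IDs for each control-plane component with a name filter, then tails each container's log. A minimal local sketch of the discovery step, assuming a reachable Docker daemon (it runs the same `docker ps` invocation directly instead of through minikube's ssh_runner):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs resolves container IDs for one kubeadm component, mirroring
// the "docker ps -a --filter=name=k8s_<component>" lines in the log above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
	}
}
```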
	I0925 12:25:57.410492    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:25:58.597156    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
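Each `Checking apiserver healthz` entry is paired with a later `stopped:` entry from the same pid once the client gives up waiting for response headers. A minimal sketch of such a probe, assuming a 5-second per-request timeout and skipped TLS verification for the cluster's self-signed certificate (both assumptions; minikube's actual api_server.go plumbing differs):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// A hung apiserver produces exactly the error shape logged above:
		// "context deadline exceeded (Client.Timeout exceeded while awaiting headers)".
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The probe target serves a self-signed certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for attempt := 0; attempt < 3; attempt++ {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err)
			continue
		}
		resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
		return
	}
}
```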
	I0925 12:25:58.597501    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:25:58.628879    4893 logs.go:276] 1 containers: [5cb9b6d95558]
	I0925 12:25:58.629034    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:25:58.648343    4893 logs.go:276] 1 containers: [9ea79e8d93b8]
	I0925 12:25:58.648452    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:25:58.668877    4893 logs.go:276] 4 containers: [7e4c37f8e257 5a5096007204 2cf271d59fa5 578e7ca35890]
	I0925 12:25:58.668975    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:25:58.679872    4893 logs.go:276] 1 containers: [a7a133842232]
	I0925 12:25:58.679964    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:25:58.690470    4893 logs.go:276] 1 containers: [57558152c4b3]
	I0925 12:25:58.690556    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:25:58.701248    4893 logs.go:276] 1 containers: [a9a507e07152]
	I0925 12:25:58.701326    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:25:58.716724    4893 logs.go:276] 0 containers: []
	W0925 12:25:58.716736    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:25:58.716807    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:25:58.728072    4893 logs.go:276] 1 containers: [d3b3239f3636]
	I0925 12:25:58.728088    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:25:58.728093    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:25:58.748006    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:25:58.748098    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:25:58.763762    4893 logs.go:123] Gathering logs for kube-scheduler [a7a133842232] ...
	I0925 12:25:58.763771    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a133842232"
	I0925 12:25:58.779236    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:25:58.779247    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:25:58.791054    4893 logs.go:123] Gathering logs for coredns [2cf271d59fa5] ...
	I0925 12:25:58.791063    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf271d59fa5"
	I0925 12:25:58.803999    4893 logs.go:123] Gathering logs for coredns [578e7ca35890] ...
	I0925 12:25:58.804015    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578e7ca35890"
	I0925 12:25:58.815716    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:25:58.815726    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:25:58.842200    4893 logs.go:123] Gathering logs for etcd [9ea79e8d93b8] ...
	I0925 12:25:58.842208    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ea79e8d93b8"
	I0925 12:25:58.856418    4893 logs.go:123] Gathering logs for coredns [5a5096007204] ...
	I0925 12:25:58.856430    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a5096007204"
	I0925 12:25:58.872953    4893 logs.go:123] Gathering logs for kube-controller-manager [a9a507e07152] ...
	I0925 12:25:58.872966    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a507e07152"
	I0925 12:25:58.890858    4893 logs.go:123] Gathering logs for coredns [7e4c37f8e257] ...
	I0925 12:25:58.890869    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4c37f8e257"
	I0925 12:25:58.903393    4893 logs.go:123] Gathering logs for kube-proxy [57558152c4b3] ...
	I0925 12:25:58.903406    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57558152c4b3"
	I0925 12:25:58.918961    4893 logs.go:123] Gathering logs for storage-provisioner [d3b3239f3636] ...
	I0925 12:25:58.918972    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b3239f3636"
	I0925 12:25:58.934350    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:25:58.934360    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:25:58.938628    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:25:58.938636    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:25:58.979242    4893 logs.go:123] Gathering logs for kube-apiserver [5cb9b6d95558] ...
	I0925 12:25:58.979257    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cb9b6d95558"
	I0925 12:25:58.993987    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:25:58.993996    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:25:58.994022    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:25:58.994027    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:25:58.994030    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:25:58.994034    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:25:58.994037    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
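The `Found kubelet problem` warnings and the `X Problems detected in kubelet` summary above come from scanning the kubelet journal for reflector list/watch failures. The specific failure here is an authorization denial: the node's identity may not read the coredns ConfigMap because the node authorizer finds no relationship between this node and that object. A minimal sketch of that kind of scan; the regular expression is a hypothetical stand-in, not minikube's actual logs.go matcher:

```go
package main

import (
	"bufio"
	"fmt"
	"regexp"
	"strings"
)

// problem is a hypothetical matcher for reflector list/watch failures in
// kubelet journal output.
var problem = regexp.MustCompile(`reflector\.go:\d+\].*(failed to list|Failed to watch)`)

func main() {
	journal := `Sep 25 19:19:44 node kubelet[3586]: W0925 19:19:44.365627 3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden
Sep 25 19:19:45 node kubelet[3586]: I0925 19:19:45.000000 3586 kubelet.go:110] ordinary informational line`
	sc := bufio.NewScanner(strings.NewReader(journal))
	for sc.Scan() {
		if problem.MatchString(sc.Text()) {
			fmt.Println("Found kubelet problem:", sc.Text())
		}
	}
}
```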
	I0925 12:26:02.412785    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:26:02.413092    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:26:02.444835    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:26:02.444947    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:26:02.459093    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:26:02.459191    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:26:02.473560    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:26:02.473644    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:26:02.483681    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:26:02.483752    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:26:02.493639    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:26:02.493724    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:26:02.512688    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:26:02.512773    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:26:02.524062    5014 logs.go:276] 0 containers: []
	W0925 12:26:02.524073    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:26:02.524138    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:26:02.534725    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:26:02.534746    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:26:02.534751    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:26:02.555347    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:26:02.555355    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:26:02.566938    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:26:02.566948    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:26:02.579603    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:26:02.579616    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:26:02.591459    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:26:02.591470    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:26:02.627991    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:26:02.628002    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:26:02.649411    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:26:02.649422    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:26:02.693716    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:26:02.693727    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:26:02.707754    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:26:02.707764    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:26:02.723203    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:26:02.723217    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:26:02.734699    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:26:02.734710    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:26:02.746198    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:26:02.746208    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:26:02.757766    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:26:02.757780    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:26:02.762611    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:26:02.762616    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:26:02.781058    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:26:02.781068    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:26:02.792294    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:26:02.792306    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:26:02.817454    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:26:02.817463    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
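Besides the per-container `docker logs --tail 400` calls, every cycle gathers four fixed host-level sources, each wrapped in `/bin/bash -c` exactly as the ssh_runner lines show. A minimal local sketch with the commands copied verbatim from the log (it assumes the guest-side paths, such as the pinned kubectl binary, exist where the commands expect them):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// One shell command per host-level log source, copied from the cycle above.
	sources := map[string]string{
		"kubelet":        "sudo journalctl -u kubelet -n 400",
		"Docker":         "sudo journalctl -u docker -u cri-docker -n 400",
		"dmesg":          "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"describe nodes": "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
	}
	for name, cmd := range sources {
		fmt.Println("Gathering logs for", name, "...")
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Println("  error:", err)
			continue
		}
		fmt.Printf("  %d bytes\n", len(out))
	}
}
```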
	I0925 12:26:05.354509    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:26:08.997980    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:26:10.356671    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:26:10.356880    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:26:10.375273    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:26:10.375390    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:26:10.389420    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:26:10.389522    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:26:10.401555    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:26:10.401636    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:26:10.413789    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:26:10.413865    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:26:10.424978    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:26:10.425055    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:26:10.435220    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:26:10.435303    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:26:10.445458    5014 logs.go:276] 0 containers: []
	W0925 12:26:10.445474    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:26:10.445553    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:26:10.456354    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:26:10.456372    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:26:10.456378    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:26:10.470822    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:26:10.470833    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:26:10.482749    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:26:10.482759    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:26:10.500243    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:26:10.500254    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:26:10.538127    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:26:10.538139    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:26:10.552294    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:26:10.552306    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:26:10.563452    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:26:10.563463    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:26:10.578445    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:26:10.578459    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:26:10.617396    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:26:10.617406    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:26:10.622009    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:26:10.622016    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:26:10.636619    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:26:10.636628    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:26:10.649575    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:26:10.649585    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:26:10.673983    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:26:10.673991    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:26:10.685708    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:26:10.685719    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:26:10.719277    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:26:10.719289    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:26:10.730921    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:26:10.730934    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:26:10.745992    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:26:10.746002    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
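The `container status` step is the one gathering command with a built-in fallback: the backticks substitute the crictl path when it is installed (or the bare name, which then fails), and the trailing `||` falls through to plain `docker ps -a`. A minimal sketch running that exact chain (assumes `/bin/bash` is available):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Copied from the "container status" lines above: prefer crictl when
	// installed, otherwise fall back to docker ps -a.
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Print(string(out))
}
```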
	I0925 12:26:13.262098    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:26:13.998378    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:26:13.998652    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:26:14.021562    4893 logs.go:276] 1 containers: [5cb9b6d95558]
	I0925 12:26:14.021690    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:26:14.036876    4893 logs.go:276] 1 containers: [9ea79e8d93b8]
	I0925 12:26:14.036977    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:26:14.049433    4893 logs.go:276] 4 containers: [7e4c37f8e257 5a5096007204 2cf271d59fa5 578e7ca35890]
	I0925 12:26:14.049517    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:26:14.060568    4893 logs.go:276] 1 containers: [a7a133842232]
	I0925 12:26:14.060654    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:26:14.071304    4893 logs.go:276] 1 containers: [57558152c4b3]
	I0925 12:26:14.071386    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:26:14.081568    4893 logs.go:276] 1 containers: [a9a507e07152]
	I0925 12:26:14.081654    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:26:14.093562    4893 logs.go:276] 0 containers: []
	W0925 12:26:14.093574    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:26:14.093644    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:26:14.104138    4893 logs.go:276] 1 containers: [d3b3239f3636]
	I0925 12:26:14.104156    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:26:14.104162    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:26:14.141128    4893 logs.go:123] Gathering logs for etcd [9ea79e8d93b8] ...
	I0925 12:26:14.141139    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ea79e8d93b8"
	I0925 12:26:14.157183    4893 logs.go:123] Gathering logs for coredns [7e4c37f8e257] ...
	I0925 12:26:14.157193    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4c37f8e257"
	I0925 12:26:14.169572    4893 logs.go:123] Gathering logs for coredns [2cf271d59fa5] ...
	I0925 12:26:14.169586    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf271d59fa5"
	I0925 12:26:14.181528    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:26:14.181538    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:26:14.206283    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:26:14.206295    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:26:14.218445    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:26:14.218456    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:26:14.239145    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:26:14.239237    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:26:14.255055    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:26:14.255064    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:26:14.261353    4893 logs.go:123] Gathering logs for kube-scheduler [a7a133842232] ...
	I0925 12:26:14.261362    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a133842232"
	I0925 12:26:14.276901    4893 logs.go:123] Gathering logs for kube-controller-manager [a9a507e07152] ...
	I0925 12:26:14.276911    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a507e07152"
	I0925 12:26:14.295073    4893 logs.go:123] Gathering logs for storage-provisioner [d3b3239f3636] ...
	I0925 12:26:14.295082    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b3239f3636"
	I0925 12:26:14.306487    4893 logs.go:123] Gathering logs for kube-apiserver [5cb9b6d95558] ...
	I0925 12:26:14.306496    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cb9b6d95558"
	I0925 12:26:14.324436    4893 logs.go:123] Gathering logs for coredns [5a5096007204] ...
	I0925 12:26:14.324446    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a5096007204"
	I0925 12:26:14.342340    4893 logs.go:123] Gathering logs for coredns [578e7ca35890] ...
	I0925 12:26:14.342350    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578e7ca35890"
	I0925 12:26:14.353508    4893 logs.go:123] Gathering logs for kube-proxy [57558152c4b3] ...
	I0925 12:26:14.353523    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57558152c4b3"
	I0925 12:26:14.364875    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:26:14.364885    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:26:14.364912    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:26:14.364916    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:26:14.364919    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:26:14.364926    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:26:14.364929    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:26:18.264342    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:26:18.264548    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:26:18.291450    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:26:18.291550    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:26:18.306395    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:26:18.306481    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:26:18.317398    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:26:18.317486    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:26:18.328250    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:26:18.328342    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:26:18.338491    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:26:18.338604    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:26:18.348882    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:26:18.348952    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:26:18.359703    5014 logs.go:276] 0 containers: []
	W0925 12:26:18.359717    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:26:18.359790    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:26:18.371225    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:26:18.371242    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:26:18.371248    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:26:18.394168    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:26:18.394186    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:26:18.419064    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:26:18.419076    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:26:18.457834    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:26:18.457843    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:26:18.494422    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:26:18.494435    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:26:18.531861    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:26:18.531872    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:26:18.547297    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:26:18.547308    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:26:18.558647    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:26:18.558660    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:26:18.572428    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:26:18.572443    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:26:18.576732    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:26:18.576739    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:26:18.595387    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:26:18.595402    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:26:18.609781    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:26:18.609791    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:26:18.624774    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:26:18.624784    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:26:18.636284    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:26:18.636293    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:26:18.648896    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:26:18.648910    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:26:18.661663    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:26:18.661676    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:26:18.676617    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:26:18.676629    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:26:21.190052    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:26:24.368903    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:26:26.192402    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:26:26.192740    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:26:26.220110    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:26:26.220252    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:26:26.237597    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:26:26.237706    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:26:26.249997    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:26:26.250072    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:26:26.260565    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:26:26.260634    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:26:26.270897    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:26:26.270982    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:26:26.281087    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:26:26.281173    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:26:26.290846    5014 logs.go:276] 0 containers: []
	W0925 12:26:26.290861    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:26:26.290929    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:26:26.301104    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:26:26.301123    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:26:26.301128    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:26:26.316170    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:26:26.316180    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:26:26.327851    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:26:26.327860    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:26:26.343205    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:26:26.343214    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:26:26.354327    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:26:26.354338    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:26:26.388588    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:26:26.388600    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:26:26.403708    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:26:26.403719    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:26:26.417327    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:26:26.417339    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:26:26.434742    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:26:26.434752    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:26:26.447769    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:26:26.447781    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:26:26.485131    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:26:26.485150    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:26:26.499011    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:26:26.499021    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:26:26.517313    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:26:26.517327    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:26:26.529159    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:26:26.529175    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:26:26.553181    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:26:26.553195    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:26:26.557983    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:26:26.557989    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:26:26.596073    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:26:26.596089    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:26:29.109687    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:26:29.371574    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:26:29.371788    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:26:29.398097    4893 logs.go:276] 1 containers: [5cb9b6d95558]
	I0925 12:26:29.398222    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:26:29.412796    4893 logs.go:276] 1 containers: [9ea79e8d93b8]
	I0925 12:26:29.412884    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:26:29.425327    4893 logs.go:276] 4 containers: [7e4c37f8e257 5a5096007204 2cf271d59fa5 578e7ca35890]
	I0925 12:26:29.425416    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:26:29.437611    4893 logs.go:276] 1 containers: [a7a133842232]
	I0925 12:26:29.437695    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:26:29.448324    4893 logs.go:276] 1 containers: [57558152c4b3]
	I0925 12:26:29.448403    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:26:29.459892    4893 logs.go:276] 1 containers: [a9a507e07152]
	I0925 12:26:29.459977    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:26:29.477324    4893 logs.go:276] 0 containers: []
	W0925 12:26:29.477338    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:26:29.477405    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:26:29.487935    4893 logs.go:276] 1 containers: [d3b3239f3636]
	I0925 12:26:29.487952    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:26:29.487958    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:26:29.512258    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:26:29.512268    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:26:29.524055    4893 logs.go:123] Gathering logs for coredns [578e7ca35890] ...
	I0925 12:26:29.524066    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578e7ca35890"
	I0925 12:26:29.539787    4893 logs.go:123] Gathering logs for kube-scheduler [a7a133842232] ...
	I0925 12:26:29.539797    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a133842232"
	I0925 12:26:29.558851    4893 logs.go:123] Gathering logs for storage-provisioner [d3b3239f3636] ...
	I0925 12:26:29.558863    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b3239f3636"
	I0925 12:26:29.570592    4893 logs.go:123] Gathering logs for coredns [7e4c37f8e257] ...
	I0925 12:26:29.570603    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4c37f8e257"
	I0925 12:26:29.582528    4893 logs.go:123] Gathering logs for coredns [5a5096007204] ...
	I0925 12:26:29.582540    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a5096007204"
	I0925 12:26:29.595156    4893 logs.go:123] Gathering logs for kube-controller-manager [a9a507e07152] ...
	I0925 12:26:29.595166    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a507e07152"
	I0925 12:26:29.613385    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:26:29.613403    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:26:29.633666    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:26:29.633761    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:26:29.650714    4893 logs.go:123] Gathering logs for kube-apiserver [5cb9b6d95558] ...
	I0925 12:26:29.650723    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cb9b6d95558"
	I0925 12:26:29.669359    4893 logs.go:123] Gathering logs for etcd [9ea79e8d93b8] ...
	I0925 12:26:29.669376    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ea79e8d93b8"
	I0925 12:26:29.682572    4893 logs.go:123] Gathering logs for kube-proxy [57558152c4b3] ...
	I0925 12:26:29.682583    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57558152c4b3"
	I0925 12:26:29.694180    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:26:29.694190    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:26:29.698439    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:26:29.698445    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:26:29.734382    4893 logs.go:123] Gathering logs for coredns [2cf271d59fa5] ...
	I0925 12:26:29.734393    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf271d59fa5"
	I0925 12:26:29.746489    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:26:29.746500    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:26:29.746531    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:26:29.746535    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:26:29.746538    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:26:29.746543    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:26:29.746545    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:26:34.112069    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:26:34.112676    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:26:34.152209    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:26:34.152369    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:26:34.173873    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:26:34.173994    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:26:34.189126    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:26:34.189221    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:26:34.202085    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:26:34.202169    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:26:34.212672    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:26:34.212757    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:26:34.223588    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:26:34.223670    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:26:34.233693    5014 logs.go:276] 0 containers: []
	W0925 12:26:34.233708    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:26:34.233768    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:26:34.244356    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:26:34.244374    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:26:34.244380    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:26:34.279630    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:26:34.279639    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:26:34.317151    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:26:34.317163    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:26:34.356009    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:26:34.356018    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:26:34.370236    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:26:34.370251    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:26:34.382362    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:26:34.382378    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:26:34.398101    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:26:34.398113    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:26:34.402590    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:26:34.402597    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:26:34.421942    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:26:34.421953    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:26:34.440831    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:26:34.440841    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:26:34.456573    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:26:34.456583    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:26:34.468413    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:26:34.468421    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:26:34.487625    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:26:34.487636    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:26:34.499262    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:26:34.499272    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:26:34.516663    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:26:34.516674    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:26:34.529263    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:26:34.529273    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:26:34.551806    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:26:34.551814    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:26:37.064739    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:26:39.750568    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:26:42.065930    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:26:42.066404    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:26:42.099165    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:26:42.099318    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:26:42.122860    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:26:42.122957    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:26:42.136027    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:26:42.136112    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:26:42.147566    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:26:42.147652    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:26:42.158009    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:26:42.158123    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:26:42.169908    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:26:42.169998    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:26:42.180859    5014 logs.go:276] 0 containers: []
	W0925 12:26:42.180870    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:26:42.180940    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:26:42.191695    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:26:42.191711    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:26:42.191716    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:26:42.227937    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:26:42.227945    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:26:42.238917    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:26:42.238929    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:26:42.261475    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:26:42.261485    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:26:42.273827    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:26:42.273837    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:26:42.278246    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:26:42.278253    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:26:42.291982    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:26:42.291992    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:26:42.303107    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:26:42.303121    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:26:42.336591    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:26:42.336607    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:26:42.351576    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:26:42.351589    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:26:42.363766    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:26:42.363775    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:26:42.380572    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:26:42.380583    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:26:42.404149    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:26:42.404159    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:26:42.417944    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:26:42.417954    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:26:42.456001    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:26:42.456018    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:26:42.480604    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:26:42.480620    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:26:42.492939    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:26:42.492953    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
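Note that pid 5014 consistently resolves two containers per component (e.g. kube-apiserver [b6573931253b f669dbb60847]) while pid 4893 sees one: `docker ps -a` includes exited containers, so after a component restart both the old and the new container IDs match the name filter. A sketch of narrowing to live containers only, using Docker's status filter (an illustrative variation, not what minikube does here):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Without -a only running containers are listed; the explicit status
	// filter makes the intent obvious. A restarted component then resolves
	// to its single live ID instead of [old new].
	out, err := exec.Command("docker", "ps",
		"--filter", "name=k8s_kube-apiserver",
		"--filter", "status=running",
		"--format", "{{.ID}}").Output()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(strings.Fields(string(out)))
}
```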
	I0925 12:26:44.752799    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:26:44.752947    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:26:44.766528    4893 logs.go:276] 1 containers: [5cb9b6d95558]
	I0925 12:26:44.766648    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:26:44.778276    4893 logs.go:276] 1 containers: [9ea79e8d93b8]
	I0925 12:26:44.778371    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:26:44.788861    4893 logs.go:276] 4 containers: [7e4c37f8e257 5a5096007204 2cf271d59fa5 578e7ca35890]
	I0925 12:26:44.788953    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:26:44.806752    4893 logs.go:276] 1 containers: [a7a133842232]
	I0925 12:26:44.806852    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:26:44.817510    4893 logs.go:276] 1 containers: [57558152c4b3]
	I0925 12:26:44.817591    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:26:44.827549    4893 logs.go:276] 1 containers: [a9a507e07152]
	I0925 12:26:44.827635    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:26:44.837905    4893 logs.go:276] 0 containers: []
	W0925 12:26:44.837915    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:26:44.837979    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:26:44.848007    4893 logs.go:276] 1 containers: [d3b3239f3636]
	I0925 12:26:44.848025    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:26:44.848030    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:26:44.852651    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:26:44.852657    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:26:44.891357    4893 logs.go:123] Gathering logs for kube-apiserver [5cb9b6d95558] ...
	I0925 12:26:44.891371    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cb9b6d95558"
	I0925 12:26:44.906343    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:26:44.906354    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:26:44.918940    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:26:44.918950    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:26:44.937060    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:26:44.937151    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:26:44.953069    4893 logs.go:123] Gathering logs for etcd [9ea79e8d93b8] ...
	I0925 12:26:44.953077    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ea79e8d93b8"
	I0925 12:26:44.966567    4893 logs.go:123] Gathering logs for kube-controller-manager [a9a507e07152] ...
	I0925 12:26:44.966577    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a507e07152"
	I0925 12:26:44.984505    4893 logs.go:123] Gathering logs for storage-provisioner [d3b3239f3636] ...
	I0925 12:26:44.984516    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b3239f3636"
	I0925 12:26:44.995887    4893 logs.go:123] Gathering logs for coredns [5a5096007204] ...
	I0925 12:26:44.995898    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a5096007204"
	I0925 12:26:45.007874    4893 logs.go:123] Gathering logs for kube-proxy [57558152c4b3] ...
	I0925 12:26:45.007882    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57558152c4b3"
	I0925 12:26:45.019963    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:26:45.019973    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:26:45.045081    4893 logs.go:123] Gathering logs for coredns [7e4c37f8e257] ...
	I0925 12:26:45.045089    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4c37f8e257"
	I0925 12:26:45.056632    4893 logs.go:123] Gathering logs for coredns [2cf271d59fa5] ...
	I0925 12:26:45.056642    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf271d59fa5"
	I0925 12:26:45.067692    4893 logs.go:123] Gathering logs for coredns [578e7ca35890] ...
	I0925 12:26:45.067702    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578e7ca35890"
	I0925 12:26:45.082250    4893 logs.go:123] Gathering logs for kube-scheduler [a7a133842232] ...
	I0925 12:26:45.082258    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a133842232"
	I0925 12:26:45.097727    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:26:45.097736    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:26:45.097763    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:26:45.097767    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:26:45.097770    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:26:45.097775    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:26:45.097778    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
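	The two "Found kubelet problem" warnings, echoed again in the "X Problems detected in kubelet" summary, come from scanning the journalctl output for known failure signatures before printing the digest. A rough sketch of that scan, assuming a single "is forbidden" pattern (minikube's real pattern set in logs.go is broader):

```go
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"regexp"
)

func main() {
	// Assumed pattern: the RBAC reflector failures shown above. The actual
	// list of problem signatures in minikube's logs.go is more extensive.
	problemRE := regexp.MustCompile(`is forbidden:`)

	out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	var problems []string
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		if line := sc.Text(); problemRE.MatchString(line) {
			fmt.Println("Found kubelet problem:", line)
			problems = append(problems, line)
		}
	}
	if len(problems) > 0 {
		fmt.Println("X Problems detected in kubelet:")
		for _, p := range problems {
			fmt.Println(" ", p)
		}
	}
}
```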
	I0925 12:26:45.006571    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:26:50.008792    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
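	Each "Checking apiserver healthz" / "stopped" pair is one iteration of a timed poll against https://10.0.2.15:8443/healthz; a client timeout is logged as "stopped" and the process falls back to another log-gathering sweep until the overall deadline expires. A minimal sketch of the poll, with TLS verification skipped for brevity (the real client trusts the cluster CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s between check and "stopped" above
		Transport: &http.Transport{
			// Simplification: the real client verifies against the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	const url = "https://10.0.2.15:8443/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		fmt.Println("Checking apiserver healthz at", url, "...")
		resp, err := client.Get(url)
		if err != nil {
			// A timeout surfaces as the "stopped: ... Client.Timeout exceeded" lines.
			fmt.Printf("stopped: %s: %v\n", url, err)
			continue
		}
		healthy := resp.StatusCode == http.StatusOK
		resp.Body.Close()
		if healthy {
			fmt.Println("apiserver healthz reported healthy")
			return
		}
		time.Sleep(2 * time.Second) // back off before re-checking a non-200 answer
	}
	fmt.Println("apiserver healthz never reported healthy: context deadline exceeded")
}
```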
	I0925 12:26:50.009349    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:26:50.048219    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:26:50.048370    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:26:50.066778    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:26:50.066891    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:26:50.085486    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:26:50.085584    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:26:50.097304    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:26:50.097394    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:26:50.107948    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:26:50.108027    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:26:50.118812    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:26:50.118885    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:26:50.129292    5014 logs.go:276] 0 containers: []
	W0925 12:26:50.129303    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:26:50.129366    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:26:50.140172    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:26:50.140191    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:26:50.140197    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:26:50.158106    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:26:50.158119    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:26:50.179786    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:26:50.179803    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:26:50.184086    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:26:50.184094    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:26:50.233715    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:26:50.233727    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:26:50.249274    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:26:50.249285    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:26:50.266104    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:26:50.266115    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:26:50.278425    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:26:50.278439    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:26:50.291192    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:26:50.291202    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:26:50.302999    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:26:50.303012    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:26:50.314319    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:26:50.314327    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:26:50.325789    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:26:50.325800    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:26:50.364304    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:26:50.364312    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:26:50.378832    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:26:50.378843    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:26:50.392915    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:26:50.392931    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:26:50.408062    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:26:50.408071    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:26:50.419097    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:26:50.419106    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:26:52.959039    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:26:55.100422    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:26:57.961741    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:26:57.961889    5014 kubeadm.go:597] duration metric: took 4m4.208761333s to restartPrimaryControlPlane
	W0925 12:26:57.962012    5014 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0925 12:26:57.962072    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0925 12:26:58.965487    5014 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.003419708s)
	I0925 12:26:58.965570    5014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 12:26:58.971054    5014 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0925 12:26:58.973922    5014 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0925 12:26:58.976995    5014 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0925 12:26:58.977003    5014 kubeadm.go:157] found existing configuration files:
	
	I0925 12:26:58.977036    5014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50513 /etc/kubernetes/admin.conf
	I0925 12:26:58.980222    5014 kubeadm.go:163] "https://control-plane.minikube.internal:50513" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50513 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0925 12:26:58.980252    5014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0925 12:26:58.982938    5014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50513 /etc/kubernetes/kubelet.conf
	I0925 12:26:58.985374    5014 kubeadm.go:163] "https://control-plane.minikube.internal:50513" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50513 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0925 12:26:58.985406    5014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0925 12:26:58.988538    5014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50513 /etc/kubernetes/controller-manager.conf
	I0925 12:26:58.991848    5014 kubeadm.go:163] "https://control-plane.minikube.internal:50513" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50513 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0925 12:26:58.991876    5014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0925 12:26:58.994552    5014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50513 /etc/kubernetes/scheduler.conf
	I0925 12:26:58.997265    5014 kubeadm.go:163] "https://control-plane.minikube.internal:50513" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50513 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0925 12:26:58.997293    5014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
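	The grep/rm sequence above is stale-config cleanup: any /etc/kubernetes/*.conf that does not mention the expected control-plane endpoint is removed so that kubeadm init can regenerate it with the right server address. A pure-Go sketch of the same check, with the paths and endpoint taken from the log (run as root on the node):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:50513"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range confs {
		data, err := os.ReadFile(conf)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or stale endpoint: remove it so kubeadm regenerates it.
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
			os.Remove(conf) // error ignored, mirroring `rm -f`
		}
	}
}
```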
	I0925 12:26:59.000657    5014 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0925 12:26:59.018457    5014 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0925 12:26:59.018492    5014 kubeadm.go:310] [preflight] Running pre-flight checks
	I0925 12:26:59.067477    5014 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0925 12:26:59.067531    5014 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0925 12:26:59.067576    5014 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0925 12:26:59.121979    5014 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0925 12:26:59.126133    5014 out.go:235]   - Generating certificates and keys ...
	I0925 12:26:59.126167    5014 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0925 12:26:59.126220    5014 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0925 12:26:59.126261    5014 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0925 12:26:59.126295    5014 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0925 12:26:59.126352    5014 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0925 12:26:59.126385    5014 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0925 12:26:59.126438    5014 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0925 12:26:59.126476    5014 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0925 12:26:59.126515    5014 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0925 12:26:59.126617    5014 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0925 12:26:59.126637    5014 kubeadm.go:310] [certs] Using the existing "sa" key
	I0925 12:26:59.126666    5014 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0925 12:26:59.382754    5014 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0925 12:26:59.547647    5014 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0925 12:26:59.741669    5014 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0925 12:26:59.857810    5014 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0925 12:26:59.886046    5014 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0925 12:26:59.886870    5014 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0925 12:26:59.886896    5014 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0925 12:26:59.981729    5014 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0925 12:27:00.102548    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:27:00.102661    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:27:00.113882    4893 logs.go:276] 1 containers: [5cb9b6d95558]
	I0925 12:27:00.113967    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:27:00.124690    4893 logs.go:276] 1 containers: [9ea79e8d93b8]
	I0925 12:27:00.124767    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:27:00.137604    4893 logs.go:276] 4 containers: [7e4c37f8e257 5a5096007204 2cf271d59fa5 578e7ca35890]
	I0925 12:27:00.137692    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:27:00.147944    4893 logs.go:276] 1 containers: [a7a133842232]
	I0925 12:27:00.148032    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:27:00.158661    4893 logs.go:276] 1 containers: [57558152c4b3]
	I0925 12:27:00.158740    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:27:00.168811    4893 logs.go:276] 1 containers: [a9a507e07152]
	I0925 12:27:00.168903    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:27:00.181607    4893 logs.go:276] 0 containers: []
	W0925 12:27:00.181619    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:27:00.181692    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:27:00.192375    4893 logs.go:276] 1 containers: [d3b3239f3636]
	I0925 12:27:00.192394    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:27:00.192400    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:27:00.227872    4893 logs.go:123] Gathering logs for coredns [5a5096007204] ...
	I0925 12:27:00.227884    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a5096007204"
	I0925 12:27:00.253405    4893 logs.go:123] Gathering logs for kube-scheduler [a7a133842232] ...
	I0925 12:27:00.253416    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a133842232"
	I0925 12:27:00.276366    4893 logs.go:123] Gathering logs for storage-provisioner [d3b3239f3636] ...
	I0925 12:27:00.276382    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b3239f3636"
	I0925 12:27:00.289681    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:27:00.289696    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:27:00.313927    4893 logs.go:123] Gathering logs for kube-apiserver [5cb9b6d95558] ...
	I0925 12:27:00.313940    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cb9b6d95558"
	I0925 12:27:00.328974    4893 logs.go:123] Gathering logs for coredns [578e7ca35890] ...
	I0925 12:27:00.328987    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578e7ca35890"
	I0925 12:27:00.340507    4893 logs.go:123] Gathering logs for kube-controller-manager [a9a507e07152] ...
	I0925 12:27:00.340519    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a507e07152"
	I0925 12:27:00.358331    4893 logs.go:123] Gathering logs for coredns [7e4c37f8e257] ...
	I0925 12:27:00.358340    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4c37f8e257"
	I0925 12:27:00.369794    4893 logs.go:123] Gathering logs for kube-proxy [57558152c4b3] ...
	I0925 12:27:00.369810    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57558152c4b3"
	I0925 12:27:00.381955    4893 logs.go:123] Gathering logs for etcd [9ea79e8d93b8] ...
	I0925 12:27:00.381964    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ea79e8d93b8"
	I0925 12:27:00.402196    4893 logs.go:123] Gathering logs for coredns [2cf271d59fa5] ...
	I0925 12:27:00.402215    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf271d59fa5"
	I0925 12:27:00.414948    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:27:00.414964    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:27:00.426698    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:27:00.426708    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:27:00.446430    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:27:00.446522    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:27:00.462172    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:27:00.462179    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:27:00.466969    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:27:00.466976    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:27:00.466998    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:27:00.467004    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:27:00.467007    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:27:00.467012    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:27:00.467014    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:26:59.984844    5014 out.go:235]   - Booting up control plane ...
	I0925 12:26:59.984894    5014 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0925 12:26:59.984942    5014 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0925 12:26:59.984988    5014 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0925 12:26:59.985029    5014 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0925 12:26:59.985133    5014 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0925 12:27:04.483858    5014 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502184 seconds
	I0925 12:27:04.483931    5014 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0925 12:27:04.488888    5014 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0925 12:27:05.010355    5014 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0925 12:27:05.010677    5014 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-814000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0925 12:27:05.514299    5014 kubeadm.go:310] [bootstrap-token] Using token: h640qa.l1d0pjuhrwb7q9j2
	I0925 12:27:05.517879    5014 out.go:235]   - Configuring RBAC rules ...
	I0925 12:27:05.517940    5014 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0925 12:27:05.525686    5014 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0925 12:27:05.527608    5014 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0925 12:27:05.528390    5014 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0925 12:27:05.529217    5014 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0925 12:27:05.530069    5014 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0925 12:27:05.532779    5014 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0925 12:27:05.706169    5014 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0925 12:27:05.927499    5014 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0925 12:27:05.928067    5014 kubeadm.go:310] 
	I0925 12:27:05.928101    5014 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0925 12:27:05.928106    5014 kubeadm.go:310] 
	I0925 12:27:05.928153    5014 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0925 12:27:05.928201    5014 kubeadm.go:310] 
	I0925 12:27:05.928252    5014 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0925 12:27:05.928281    5014 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0925 12:27:05.928303    5014 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0925 12:27:05.928306    5014 kubeadm.go:310] 
	I0925 12:27:05.928348    5014 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0925 12:27:05.928351    5014 kubeadm.go:310] 
	I0925 12:27:05.928375    5014 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0925 12:27:05.928382    5014 kubeadm.go:310] 
	I0925 12:27:05.928445    5014 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0925 12:27:05.928480    5014 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0925 12:27:05.928525    5014 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0925 12:27:05.928529    5014 kubeadm.go:310] 
	I0925 12:27:05.928567    5014 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0925 12:27:05.928625    5014 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0925 12:27:05.928630    5014 kubeadm.go:310] 
	I0925 12:27:05.928673    5014 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token h640qa.l1d0pjuhrwb7q9j2 \
	I0925 12:27:05.928728    5014 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e51346daa4df67057de8045209492e1d5416aabfe1ee2597d0ef678584899cc1 \
	I0925 12:27:05.928745    5014 kubeadm.go:310] 	--control-plane 
	I0925 12:27:05.928748    5014 kubeadm.go:310] 
	I0925 12:27:05.928800    5014 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0925 12:27:05.928809    5014 kubeadm.go:310] 
	I0925 12:27:05.928848    5014 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token h640qa.l1d0pjuhrwb7q9j2 \
	I0925 12:27:05.928908    5014 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e51346daa4df67057de8045209492e1d5416aabfe1ee2597d0ef678584899cc1 
	I0925 12:27:05.928994    5014 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0925 12:27:05.929004    5014 cni.go:84] Creating CNI manager for ""
	I0925 12:27:05.929013    5014 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 12:27:05.935030    5014 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0925 12:27:05.944252    5014 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0925 12:27:05.947435    5014 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
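	The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not shown in the log. The sketch below writes a representative bridge conflist of the same general shape; the JSON contents are illustrative only, not the actual file minikube ships:

```go
package main

import "os"

// Illustrative only: the real conflist minikube copies is not reproduced
// in the log. This is a typical bridge CNI config of the same shape.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Equivalent of: sudo mkdir -p /etc/cni/net.d, then copying the conflist.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```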
	I0925 12:27:05.952092    5014 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0925 12:27:05.952139    5014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-814000 minikube.k8s.io/updated_at=2024_09_25T12_27_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=cb9e6220ecbd737c1d09ad9630c6f144f437664a minikube.k8s.io/name=stopped-upgrade-814000 minikube.k8s.io/primary=true
	I0925 12:27:05.952140    5014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 12:27:05.996174    5014 kubeadm.go:1113] duration metric: took 44.073625ms to wait for elevateKubeSystemPrivileges
	I0925 12:27:05.996192    5014 ops.go:34] apiserver oom_adj: -16
	I0925 12:27:05.996204    5014 kubeadm.go:394] duration metric: took 4m12.257006667s to StartCluster
	I0925 12:27:05.996214    5014 settings.go:142] acquiring lock: {Name:mk3a21ccfd977fa63a309ae265edad20537229ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:27:05.996304    5014 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:27:05.996739    5014 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/kubeconfig: {Name:mkc011f0309eba8a9546287478e16310d103c97e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
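	The "WriteFile acquiring" line reflects the lock spec {Delay:500ms Timeout:1m0s} that guards kubeconfig updates: take a named cross-process lock, retrying every 500ms for up to a minute, before writing the file. A crude sketch using an O_EXCL lock file (acquire is a hypothetical stand-in for the mutex library minikube actually uses):

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquire takes a lock file with retry, mirroring the Delay:500ms
// Timeout:1m0s lock spec in the log. Hypothetical helper; minikube
// uses a proper cross-process mutex rather than a bare lock file.
func acquire(lockPath string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL, 0o600)
		if err == nil {
			f.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out acquiring " + lockPath)
		}
		time.Sleep(delay)
	}
}

func main() {
	lock := os.ExpandEnv("$HOME/.kube/config.lock")
	if err := acquire(lock, 500*time.Millisecond, time.Minute); err != nil {
		fmt.Println("error:", err)
		return
	}
	defer os.Remove(lock)
	// ... rewrite the kubeconfig while holding the lock ...
	fmt.Println("kubeconfig updated under", lock)
}
```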
	I0925 12:27:05.996936    5014 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:27:05.996999    5014 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0925 12:27:05.997032    5014 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-814000"
	I0925 12:27:05.997043    5014 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-814000"
	I0925 12:27:05.997043    5014 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-814000"
	W0925 12:27:05.997047    5014 addons.go:243] addon storage-provisioner should already be in state true
	I0925 12:27:05.997051    5014 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-814000"
	I0925 12:27:05.997059    5014 host.go:66] Checking if "stopped-upgrade-814000" exists ...
	I0925 12:27:05.997076    5014 config.go:182] Loaded profile config "stopped-upgrade-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0925 12:27:05.998076    5014 kapi.go:59] client config for stopped-upgrade-814000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/client.key", CAFile:"/Users/jenkins/minikube-integration/19681-1412/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1041aa030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0925 12:27:05.998193    5014 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-814000"
	W0925 12:27:05.998197    5014 addons.go:243] addon default-storageclass should already be in state true
	I0925 12:27:05.998203    5014 host.go:66] Checking if "stopped-upgrade-814000" exists ...
	I0925 12:27:06.000972    5014 out.go:177] * Verifying Kubernetes components...
	I0925 12:27:06.001378    5014 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0925 12:27:06.005165    5014 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0925 12:27:06.005174    5014 sshutil.go:53] new ssh client: &{IP:localhost Port:50480 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/stopped-upgrade-814000/id_rsa Username:docker}
	I0925 12:27:06.009011    5014 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 12:27:06.011957    5014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 12:27:06.016071    5014 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 12:27:06.016079    5014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0925 12:27:06.016087    5014 sshutil.go:53] new ssh client: &{IP:localhost Port:50480 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/stopped-upgrade-814000/id_rsa Username:docker}
	I0925 12:27:06.097894    5014 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0925 12:27:06.103165    5014 api_server.go:52] waiting for apiserver process to appear ...
	I0925 12:27:06.103217    5014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 12:27:06.107232    5014 api_server.go:72] duration metric: took 110.2875ms to wait for apiserver process to appear ...
	I0925 12:27:06.107240    5014 api_server.go:88] waiting for apiserver healthz status ...
	I0925 12:27:06.107247    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:27:06.122958    5014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 12:27:06.186999    5014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0925 12:27:06.504707    5014 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0925 12:27:06.504719    5014 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
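	The kapi.go client config dump earlier in this block corresponds to building a client-go rest.Config from the profile's client certificate, key, and cluster CA. A minimal equivalent, with the certificate paths taken from the log; with the apiserver unreachable, the StorageClass list call fails with the same i/o timeout the default-storageclass addon reports below:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Paths as logged by kapi.go:59 for the stopped-upgrade-814000 profile.
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/client.crt",
			KeyFile:  "/Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/client.key",
			CAFile:   "/Users/jenkins/minikube-integration/19681-1412/.minikube/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The list the default-storageclass addon needs; with the apiserver down
	// this returns the "dial tcp 10.0.2.15:8443: i/o timeout" seen later.
	scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		fmt.Println("Error listing StorageClasses:", err)
		return
	}
	fmt.Println("storage classes:", len(scs.Items))
}
```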
	I0925 12:27:10.470930    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:27:11.108180    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:27:11.108231    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:27:15.473107    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:27:15.473287    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:27:15.490548    4893 logs.go:276] 1 containers: [5cb9b6d95558]
	I0925 12:27:15.490651    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:27:15.504984    4893 logs.go:276] 1 containers: [9ea79e8d93b8]
	I0925 12:27:15.505069    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:27:15.516037    4893 logs.go:276] 4 containers: [7e4c37f8e257 5a5096007204 2cf271d59fa5 578e7ca35890]
	I0925 12:27:15.516129    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:27:15.533688    4893 logs.go:276] 1 containers: [a7a133842232]
	I0925 12:27:15.533777    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:27:15.546279    4893 logs.go:276] 1 containers: [57558152c4b3]
	I0925 12:27:15.546356    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:27:15.556798    4893 logs.go:276] 1 containers: [a9a507e07152]
	I0925 12:27:15.556873    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:27:15.566883    4893 logs.go:276] 0 containers: []
	W0925 12:27:15.566902    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:27:15.566966    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:27:15.577977    4893 logs.go:276] 1 containers: [d3b3239f3636]
	I0925 12:27:15.578000    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:27:15.578005    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:27:15.590092    4893 logs.go:123] Gathering logs for kube-apiserver [5cb9b6d95558] ...
	I0925 12:27:15.590104    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cb9b6d95558"
	I0925 12:27:15.605968    4893 logs.go:123] Gathering logs for coredns [5a5096007204] ...
	I0925 12:27:15.605978    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a5096007204"
	I0925 12:27:15.617930    4893 logs.go:123] Gathering logs for coredns [2cf271d59fa5] ...
	I0925 12:27:15.617941    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf271d59fa5"
	I0925 12:27:15.630607    4893 logs.go:123] Gathering logs for kube-scheduler [a7a133842232] ...
	I0925 12:27:15.630616    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a133842232"
	I0925 12:27:15.645905    4893 logs.go:123] Gathering logs for kube-controller-manager [a9a507e07152] ...
	I0925 12:27:15.645915    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a507e07152"
	I0925 12:27:15.663755    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:27:15.663764    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:27:15.702299    4893 logs.go:123] Gathering logs for coredns [7e4c37f8e257] ...
	I0925 12:27:15.702309    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4c37f8e257"
	I0925 12:27:15.714397    4893 logs.go:123] Gathering logs for storage-provisioner [d3b3239f3636] ...
	I0925 12:27:15.714407    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b3239f3636"
	I0925 12:27:15.726265    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:27:15.726278    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:27:15.746604    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:27:15.746701    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:27:15.762661    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:27:15.762670    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:27:15.767276    4893 logs.go:123] Gathering logs for etcd [9ea79e8d93b8] ...
	I0925 12:27:15.767287    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ea79e8d93b8"
	I0925 12:27:15.781567    4893 logs.go:123] Gathering logs for coredns [578e7ca35890] ...
	I0925 12:27:15.781577    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578e7ca35890"
	I0925 12:27:15.794117    4893 logs.go:123] Gathering logs for kube-proxy [57558152c4b3] ...
	I0925 12:27:15.794128    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57558152c4b3"
	I0925 12:27:15.806625    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:27:15.806638    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:27:15.830301    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:27:15.830311    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:27:15.830336    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:27:15.830340    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:27:15.830345    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:27:15.830348    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:27:15.830351    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:27:16.109152    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:27:16.109211    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:27:21.109432    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:27:21.109481    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:27:25.834305    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:27:26.109790    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:27:26.109835    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:27:30.836450    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:27:30.836591    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:27:30.857426    4893 logs.go:276] 1 containers: [5cb9b6d95558]
	I0925 12:27:30.857511    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:27:30.874698    4893 logs.go:276] 1 containers: [9ea79e8d93b8]
	I0925 12:27:30.874784    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:27:30.885522    4893 logs.go:276] 4 containers: [7e4c37f8e257 5a5096007204 2cf271d59fa5 578e7ca35890]
	I0925 12:27:30.885605    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:27:30.900393    4893 logs.go:276] 1 containers: [a7a133842232]
	I0925 12:27:30.900480    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:27:30.911851    4893 logs.go:276] 1 containers: [57558152c4b3]
	I0925 12:27:30.911935    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:27:30.923120    4893 logs.go:276] 1 containers: [a9a507e07152]
	I0925 12:27:30.923206    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:27:30.933484    4893 logs.go:276] 0 containers: []
	W0925 12:27:30.933497    4893 logs.go:278] No container was found matching "kindnet"
	I0925 12:27:30.933568    4893 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:27:30.944452    4893 logs.go:276] 1 containers: [d3b3239f3636]
	I0925 12:27:30.944468    4893 logs.go:123] Gathering logs for kubelet ...
	I0925 12:27:30.944473    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0925 12:27:30.962441    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:27:30.962534    4893 logs.go:138] Found kubelet problem: Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:27:30.978183    4893 logs.go:123] Gathering logs for dmesg ...
	I0925 12:27:30.978191    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:27:30.982917    4893 logs.go:123] Gathering logs for coredns [5a5096007204] ...
	I0925 12:27:30.982925    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a5096007204"
	I0925 12:27:30.994696    4893 logs.go:123] Gathering logs for kube-scheduler [a7a133842232] ...
	I0925 12:27:30.994707    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a133842232"
	I0925 12:27:31.010488    4893 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:27:31.010499    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:27:31.046251    4893 logs.go:123] Gathering logs for kube-apiserver [5cb9b6d95558] ...
	I0925 12:27:31.046266    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cb9b6d95558"
	I0925 12:27:31.060597    4893 logs.go:123] Gathering logs for etcd [9ea79e8d93b8] ...
	I0925 12:27:31.060609    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ea79e8d93b8"
	I0925 12:27:31.078874    4893 logs.go:123] Gathering logs for coredns [2cf271d59fa5] ...
	I0925 12:27:31.078883    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cf271d59fa5"
	I0925 12:27:31.091612    4893 logs.go:123] Gathering logs for kube-proxy [57558152c4b3] ...
	I0925 12:27:31.091624    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57558152c4b3"
	I0925 12:27:31.103032    4893 logs.go:123] Gathering logs for coredns [7e4c37f8e257] ...
	I0925 12:27:31.103043    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4c37f8e257"
	I0925 12:27:31.114612    4893 logs.go:123] Gathering logs for storage-provisioner [d3b3239f3636] ...
	I0925 12:27:31.114621    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b3239f3636"
	I0925 12:27:31.126012    4893 logs.go:123] Gathering logs for coredns [578e7ca35890] ...
	I0925 12:27:31.126022    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 578e7ca35890"
	I0925 12:27:31.137632    4893 logs.go:123] Gathering logs for kube-controller-manager [a9a507e07152] ...
	I0925 12:27:31.137646    4893 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a507e07152"
	I0925 12:27:31.155126    4893 logs.go:123] Gathering logs for Docker ...
	I0925 12:27:31.155136    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:27:31.180043    4893 logs.go:123] Gathering logs for container status ...
	I0925 12:27:31.180052    4893 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:27:31.191490    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:27:31.191499    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0925 12:27:31.191527    4893 out.go:270] X Problems detected in kubelet:
	W0925 12:27:31.191532    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: W0925 19:19:44.365627    3586 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	W0925 12:27:31.191535    4893 out.go:270]   Sep 25 19:19:44 running-upgrade-796000 kubelet[3586]: E0925 19:19:44.365647    3586 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-796000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-796000' and this object
	I0925 12:27:31.191539    4893 out.go:358] Setting ErrFile to fd 2...
	I0925 12:27:31.191543    4893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:27:31.110196    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:27:31.110229    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:27:36.110721    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:27:36.110755    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0925 12:27:36.506521    5014 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0925 12:27:36.510895    5014 out.go:177] * Enabled addons: storage-provisioner
	I0925 12:27:36.518715    5014 addons.go:510] duration metric: took 30.522328542s for enable addons: enabled=[storage-provisioner]
	I0925 12:27:41.195525    4893 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:27:41.111408    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:27:41.111457    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:27:46.197102    4893 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:27:46.201470    4893 out.go:201] 
	W0925 12:27:46.205391    4893 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0925 12:27:46.205398    4893 out.go:270] * 
	W0925 12:27:46.205835    4893 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 12:27:46.217397    4893 out.go:201] 
	I0925 12:27:46.112465    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:27:46.112503    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:27:51.113662    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:27:51.113703    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:27:56.115287    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:27:56.115319    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Wed 2024-09-25 19:18:50 UTC, ends at Wed 2024-09-25 19:28:02 UTC. --
	Sep 25 19:27:44 running-upgrade-796000 dockerd[2894]: time="2024-09-25T19:27:44.085192049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 19:27:44 running-upgrade-796000 dockerd[2894]: time="2024-09-25T19:27:44.085233465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 19:27:44 running-upgrade-796000 dockerd[2894]: time="2024-09-25T19:27:44.085244131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 19:27:44 running-upgrade-796000 dockerd[2894]: time="2024-09-25T19:27:44.085303547Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8a74299fffa63ce15deff0372892b838fefb8d5dc5d2399b26c85372f9d61ca4 pid=15285 runtime=io.containerd.runc.v2
	Sep 25 19:27:45 running-upgrade-796000 cri-dockerd[2735]: time="2024-09-25T19:27:45Z" level=error msg="ContainerStats resp: {0x4000267840 linux}"
	Sep 25 19:27:45 running-upgrade-796000 cri-dockerd[2735]: time="2024-09-25T19:27:45Z" level=error msg="ContainerStats resp: {0x4000267980 linux}"
	Sep 25 19:27:45 running-upgrade-796000 cri-dockerd[2735]: time="2024-09-25T19:27:45Z" level=error msg="ContainerStats resp: {0x4000267b40 linux}"
	Sep 25 19:27:45 running-upgrade-796000 cri-dockerd[2735]: time="2024-09-25T19:27:45Z" level=error msg="ContainerStats resp: {0x400024f880 linux}"
	Sep 25 19:27:45 running-upgrade-796000 cri-dockerd[2735]: time="2024-09-25T19:27:45Z" level=error msg="ContainerStats resp: {0x400084ce80 linux}"
	Sep 25 19:27:45 running-upgrade-796000 cri-dockerd[2735]: time="2024-09-25T19:27:45Z" level=error msg="ContainerStats resp: {0x400084cf40 linux}"
	Sep 25 19:27:45 running-upgrade-796000 cri-dockerd[2735]: time="2024-09-25T19:27:45Z" level=error msg="ContainerStats resp: {0x400024fec0 linux}"
	Sep 25 19:27:46 running-upgrade-796000 cri-dockerd[2735]: time="2024-09-25T19:27:46Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 25 19:27:51 running-upgrade-796000 cri-dockerd[2735]: time="2024-09-25T19:27:51Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 25 19:27:55 running-upgrade-796000 cri-dockerd[2735]: time="2024-09-25T19:27:55Z" level=error msg="ContainerStats resp: {0x4000757d40 linux}"
	Sep 25 19:27:55 running-upgrade-796000 cri-dockerd[2735]: time="2024-09-25T19:27:55Z" level=error msg="ContainerStats resp: {0x400024fbc0 linux}"
	Sep 25 19:27:56 running-upgrade-796000 cri-dockerd[2735]: time="2024-09-25T19:27:56Z" level=error msg="ContainerStats resp: {0x4000988b00 linux}"
	Sep 25 19:27:56 running-upgrade-796000 cri-dockerd[2735]: time="2024-09-25T19:27:56Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 25 19:27:57 running-upgrade-796000 cri-dockerd[2735]: time="2024-09-25T19:27:57Z" level=error msg="ContainerStats resp: {0x4000943600 linux}"
	Sep 25 19:27:57 running-upgrade-796000 cri-dockerd[2735]: time="2024-09-25T19:27:57Z" level=error msg="ContainerStats resp: {0x4000989d00 linux}"
	Sep 25 19:27:57 running-upgrade-796000 cri-dockerd[2735]: time="2024-09-25T19:27:57Z" level=error msg="ContainerStats resp: {0x4000890280 linux}"
	Sep 25 19:27:57 running-upgrade-796000 cri-dockerd[2735]: time="2024-09-25T19:27:57Z" level=error msg="ContainerStats resp: {0x40004fcd80 linux}"
	Sep 25 19:27:57 running-upgrade-796000 cri-dockerd[2735]: time="2024-09-25T19:27:57Z" level=error msg="ContainerStats resp: {0x4000890b80 linux}"
	Sep 25 19:27:57 running-upgrade-796000 cri-dockerd[2735]: time="2024-09-25T19:27:57Z" level=error msg="ContainerStats resp: {0x40008911c0 linux}"
	Sep 25 19:27:57 running-upgrade-796000 cri-dockerd[2735]: time="2024-09-25T19:27:57Z" level=error msg="ContainerStats resp: {0x4000891680 linux}"
	Sep 25 19:28:01 running-upgrade-796000 cri-dockerd[2735]: time="2024-09-25T19:28:01Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	8a74299fffa63       edaa71f2aee88       18 seconds ago      Running             coredns                   2                   1607fb2f591c2
	c5cce48f8f391       edaa71f2aee88       18 seconds ago      Running             coredns                   2                   8440c3381d028
	7e4c37f8e2571       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   1607fb2f591c2
	5a50960072042       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   8440c3381d028
	57558152c4b37       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   33de38fca4bad
	d3b3239f3636a       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   9cd13fd4ca305
	a9a507e071522       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   674020a2fad0f
	a7a1338422324       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   e800442ad00dd
	9ea79e8d93b85       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   8b023f22c3ae1
	5cb9b6d95558b       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   cf99022a1e501
	
	
	==> coredns [5a5096007204] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6874367620805325088.6497451826440646612. HINFO: read udp 10.244.0.2:43184->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6874367620805325088.6497451826440646612. HINFO: read udp 10.244.0.2:34588->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6874367620805325088.6497451826440646612. HINFO: read udp 10.244.0.2:50727->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6874367620805325088.6497451826440646612. HINFO: read udp 10.244.0.2:41949->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6874367620805325088.6497451826440646612. HINFO: read udp 10.244.0.2:55692->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6874367620805325088.6497451826440646612. HINFO: read udp 10.244.0.2:48626->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6874367620805325088.6497451826440646612. HINFO: read udp 10.244.0.2:54246->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6874367620805325088.6497451826440646612. HINFO: read udp 10.244.0.2:34078->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6874367620805325088.6497451826440646612. HINFO: read udp 10.244.0.2:46098->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6874367620805325088.6497451826440646612. HINFO: read udp 10.244.0.2:52426->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7e4c37f8e257] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2649357507348173290.2152057236181846788. HINFO: read udp 10.244.0.3:45382->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2649357507348173290.2152057236181846788. HINFO: read udp 10.244.0.3:40883->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2649357507348173290.2152057236181846788. HINFO: read udp 10.244.0.3:49547->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2649357507348173290.2152057236181846788. HINFO: read udp 10.244.0.3:47738->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2649357507348173290.2152057236181846788. HINFO: read udp 10.244.0.3:46802->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2649357507348173290.2152057236181846788. HINFO: read udp 10.244.0.3:33949->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2649357507348173290.2152057236181846788. HINFO: read udp 10.244.0.3:56896->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2649357507348173290.2152057236181846788. HINFO: read udp 10.244.0.3:44268->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2649357507348173290.2152057236181846788. HINFO: read udp 10.244.0.3:51570->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2649357507348173290.2152057236181846788. HINFO: read udp 10.244.0.3:58269->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8a74299fffa6] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4594783449494437040.6711115097721988906. HINFO: read udp 10.244.0.3:60045->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4594783449494437040.6711115097721988906. HINFO: read udp 10.244.0.3:35075->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4594783449494437040.6711115097721988906. HINFO: read udp 10.244.0.3:56652->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4594783449494437040.6711115097721988906. HINFO: read udp 10.244.0.3:41166->10.0.2.3:53: i/o timeout
	
	
	==> coredns [c5cce48f8f39] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6802623047030515589.1402348946009514436. HINFO: read udp 10.244.0.2:36188->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6802623047030515589.1402348946009514436. HINFO: read udp 10.244.0.2:37643->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6802623047030515589.1402348946009514436. HINFO: read udp 10.244.0.2:37631->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6802623047030515589.1402348946009514436. HINFO: read udp 10.244.0.2:60769->10.0.2.3:53: i/o timeout
	
	
	==> describe nodes <==
	Name:               running-upgrade-796000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-796000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cb9e6220ecbd737c1d09ad9630c6f144f437664a
	                    minikube.k8s.io/name=running-upgrade-796000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_25T12_23_41_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 25 Sep 2024 19:23:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-796000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 25 Sep 2024 19:27:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 25 Sep 2024 19:23:41 +0000   Wed, 25 Sep 2024 19:23:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 25 Sep 2024 19:23:41 +0000   Wed, 25 Sep 2024 19:23:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 25 Sep 2024 19:23:41 +0000   Wed, 25 Sep 2024 19:23:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 25 Sep 2024 19:23:41 +0000   Wed, 25 Sep 2024 19:23:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-796000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 7cbadb2ca85f43a09086b668bae41168
	  System UUID:                7cbadb2ca85f43a09086b668bae41168
	  Boot ID:                    a687a499-1571-4edc-915b-748f85c8d3f0
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-2xq7h                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m7s
	  kube-system                 coredns-6d4b75cb6d-hl9df                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m7s
	  kube-system                 etcd-running-upgrade-796000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m22s
	  kube-system                 kube-apiserver-running-upgrade-796000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-controller-manager-running-upgrade-796000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-proxy-7z6v2                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-scheduler-running-upgrade-796000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m6s                   kube-proxy       
	  Normal  Starting                 4m26s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m26s (x4 over 4m26s)  kubelet          Node running-upgrade-796000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m26s (x3 over 4m26s)  kubelet          Node running-upgrade-796000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m26s (x3 over 4m26s)  kubelet          Node running-upgrade-796000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m21s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m21s                  kubelet          Node running-upgrade-796000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s                  kubelet          Node running-upgrade-796000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s                  kubelet          Node running-upgrade-796000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m21s                  kubelet          Node running-upgrade-796000 status is now: NodeReady
	  Normal  RegisteredNode           4m8s                   node-controller  Node running-upgrade-796000 event: Registered Node running-upgrade-796000 in Controller
	
	
	==> dmesg <==
	[  +1.606324] systemd-fstab-generator[876]: Ignoring "noauto" for root device
	[  +0.063647] systemd-fstab-generator[887]: Ignoring "noauto" for root device
	[  +0.066609] systemd-fstab-generator[898]: Ignoring "noauto" for root device
	[  +1.129706] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.074691] systemd-fstab-generator[1048]: Ignoring "noauto" for root device
	[  +0.079063] systemd-fstab-generator[1059]: Ignoring "noauto" for root device
	[  +2.797210] systemd-fstab-generator[1285]: Ignoring "noauto" for root device
	[  +9.144099] systemd-fstab-generator[1928]: Ignoring "noauto" for root device
	[  +2.511590] systemd-fstab-generator[2203]: Ignoring "noauto" for root device
	[  +0.129981] systemd-fstab-generator[2239]: Ignoring "noauto" for root device
	[  +0.084131] systemd-fstab-generator[2251]: Ignoring "noauto" for root device
	[  +0.081353] systemd-fstab-generator[2266]: Ignoring "noauto" for root device
	[  +1.635121] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.150767] systemd-fstab-generator[2690]: Ignoring "noauto" for root device
	[  +0.088283] systemd-fstab-generator[2701]: Ignoring "noauto" for root device
	[  +0.078741] systemd-fstab-generator[2712]: Ignoring "noauto" for root device
	[  +0.081454] systemd-fstab-generator[2728]: Ignoring "noauto" for root device
	[  +2.399450] systemd-fstab-generator[2881]: Ignoring "noauto" for root device
	[  +3.066071] systemd-fstab-generator[3278]: Ignoring "noauto" for root device
	[  +1.534519] systemd-fstab-generator[3580]: Ignoring "noauto" for root device
	[ +16.281550] kauditd_printk_skb: 68 callbacks suppressed
	[Sep25 19:23] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.242227] systemd-fstab-generator[9729]: Ignoring "noauto" for root device
	[  +5.125905] systemd-fstab-generator[10312]: Ignoring "noauto" for root device
	[  +0.477793] systemd-fstab-generator[10464]: Ignoring "noauto" for root device
	
	
	==> etcd [9ea79e8d93b8] <==
	{"level":"info","ts":"2024-09-25T19:23:37.503Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-09-25T19:23:37.503Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-09-25T19:23:37.515Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-25T19:23:37.515Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-25T19:23:37.515Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-25T19:23:37.516Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-25T19:23:37.516Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-25T19:23:37.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-25T19:23:37.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-25T19:23:37.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-09-25T19:23:37.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-09-25T19:23:37.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-25T19:23:37.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-09-25T19:23:37.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-25T19:23:37.666Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-25T19:23:37.666Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-25T19:23:37.666Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-25T19:23:37.666Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-25T19:23:37.666Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-796000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-25T19:23:37.666Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-25T19:23:37.667Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-09-25T19:23:37.670Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-25T19:23:37.670Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-25T19:23:37.678Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-25T19:23:37.678Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:28:02 up 9 min,  0 users,  load average: 0.47, 0.52, 0.31
	Linux running-upgrade-796000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [5cb9b6d95558] <==
	I0925 19:23:39.094592       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0925 19:23:39.095865       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0925 19:23:39.095951       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0925 19:23:39.095962       1 cache.go:39] Caches are synced for autoregister controller
	I0925 19:23:39.096887       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0925 19:23:39.121346       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0925 19:23:39.131757       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0925 19:23:39.817111       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0925 19:23:40.001878       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0925 19:23:40.006821       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0925 19:23:40.006852       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0925 19:23:40.149813       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0925 19:23:40.160592       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0925 19:23:40.255108       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0925 19:23:40.256887       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0925 19:23:40.257280       1 controller.go:611] quota admission added evaluator for: endpoints
	I0925 19:23:40.258565       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0925 19:23:41.137398       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0925 19:23:41.547684       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0925 19:23:41.550896       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0925 19:23:41.555026       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0925 19:23:41.607023       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0925 19:23:55.205958       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0925 19:23:55.604454       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0925 19:23:56.141885       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [a9a507e07152] <==
	I0925 19:23:54.902876       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0925 19:23:54.903076       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0925 19:23:54.903122       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0925 19:23:54.902880       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0925 19:23:54.902907       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0925 19:23:54.902887       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0925 19:23:54.902911       1 shared_informer.go:262] Caches are synced for ephemeral
	I0925 19:23:54.902914       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0925 19:23:54.902884       1 shared_informer.go:262] Caches are synced for GC
	I0925 19:23:54.903427       1 shared_informer.go:262] Caches are synced for expand
	I0925 19:23:54.952910       1 shared_informer.go:262] Caches are synced for attach detach
	I0925 19:23:55.002565       1 shared_informer.go:262] Caches are synced for disruption
	I0925 19:23:55.002574       1 disruption.go:371] Sending events to api server.
	I0925 19:23:55.102942       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0925 19:23:55.105177       1 shared_informer.go:262] Caches are synced for resource quota
	I0925 19:23:55.122204       1 shared_informer.go:262] Caches are synced for resource quota
	I0925 19:23:55.131633       1 shared_informer.go:262] Caches are synced for job
	I0925 19:23:55.154016       1 shared_informer.go:262] Caches are synced for cronjob
	I0925 19:23:55.207778       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0925 19:23:55.521233       1 shared_informer.go:262] Caches are synced for garbage collector
	I0925 19:23:55.553097       1 shared_informer.go:262] Caches are synced for garbage collector
	I0925 19:23:55.553119       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0925 19:23:55.607296       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-7z6v2"
	I0925 19:23:55.905710       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-hl9df"
	I0925 19:23:55.909598       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-2xq7h"
	
	
	==> kube-proxy [57558152c4b3] <==
	I0925 19:23:56.129502       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0925 19:23:56.129539       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0925 19:23:56.129551       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0925 19:23:56.139548       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0925 19:23:56.139560       1 server_others.go:206] "Using iptables Proxier"
	I0925 19:23:56.139577       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0925 19:23:56.139752       1 server.go:661] "Version info" version="v1.24.1"
	I0925 19:23:56.139761       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0925 19:23:56.140174       1 config.go:317] "Starting service config controller"
	I0925 19:23:56.140180       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0925 19:23:56.140192       1 config.go:226] "Starting endpoint slice config controller"
	I0925 19:23:56.140193       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0925 19:23:56.141004       1 config.go:444] "Starting node config controller"
	I0925 19:23:56.141007       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0925 19:23:56.240977       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0925 19:23:56.241007       1 shared_informer.go:262] Caches are synced for service config
	I0925 19:23:56.241120       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [a7a133842232] <==
	W0925 19:23:39.047058       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0925 19:23:39.047061       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0925 19:23:39.047071       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0925 19:23:39.047074       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0925 19:23:39.047083       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0925 19:23:39.047086       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0925 19:23:39.047096       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0925 19:23:39.047100       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0925 19:23:39.047119       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0925 19:23:39.047125       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0925 19:23:39.047147       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0925 19:23:39.047150       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0925 19:23:39.047161       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0925 19:23:39.047164       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0925 19:23:39.858015       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0925 19:23:39.858077       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0925 19:23:39.955402       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0925 19:23:39.955487       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0925 19:23:39.955839       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0925 19:23:39.955875       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0925 19:23:39.968824       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0925 19:23:39.968871       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0925 19:23:40.067024       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0925 19:23:40.067158       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0925 19:23:40.444411       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-09-25 19:18:50 UTC, ends at Wed 2024-09-25 19:28:02 UTC. --
	Sep 25 19:23:43 running-upgrade-796000 kubelet[10318]: E0925 19:23:43.384394   10318 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-796000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-796000"
	Sep 25 19:23:43 running-upgrade-796000 kubelet[10318]: E0925 19:23:43.586191   10318 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-796000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-796000"
	Sep 25 19:23:43 running-upgrade-796000 kubelet[10318]: I0925 19:23:43.778072   10318 request.go:601] Waited for 1.141925646s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Sep 25 19:23:43 running-upgrade-796000 kubelet[10318]: E0925 19:23:43.783894   10318 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-796000\" already exists" pod="kube-system/etcd-running-upgrade-796000"
	Sep 25 19:23:54 running-upgrade-796000 kubelet[10318]: I0925 19:23:54.864831   10318 topology_manager.go:200] "Topology Admit Handler"
	Sep 25 19:23:54 running-upgrade-796000 kubelet[10318]: I0925 19:23:54.907727   10318 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 25 19:23:54 running-upgrade-796000 kubelet[10318]: I0925 19:23:54.908004   10318 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 25 19:23:55 running-upgrade-796000 kubelet[10318]: I0925 19:23:55.008486   10318 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2b2aedfc-825e-49b1-8b58-6c8a1a948bf4-tmp\") pod \"storage-provisioner\" (UID: \"2b2aedfc-825e-49b1-8b58-6c8a1a948bf4\") " pod="kube-system/storage-provisioner"
	Sep 25 19:23:55 running-upgrade-796000 kubelet[10318]: I0925 19:23:55.008524   10318 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kp7x\" (UniqueName: \"kubernetes.io/projected/2b2aedfc-825e-49b1-8b58-6c8a1a948bf4-kube-api-access-9kp7x\") pod \"storage-provisioner\" (UID: \"2b2aedfc-825e-49b1-8b58-6c8a1a948bf4\") " pod="kube-system/storage-provisioner"
	Sep 25 19:23:55 running-upgrade-796000 kubelet[10318]: E0925 19:23:55.113238   10318 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Sep 25 19:23:55 running-upgrade-796000 kubelet[10318]: E0925 19:23:55.113258   10318 projected.go:192] Error preparing data for projected volume kube-api-access-9kp7x for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Sep 25 19:23:55 running-upgrade-796000 kubelet[10318]: E0925 19:23:55.113294   10318 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/2b2aedfc-825e-49b1-8b58-6c8a1a948bf4-kube-api-access-9kp7x podName:2b2aedfc-825e-49b1-8b58-6c8a1a948bf4 nodeName:}" failed. No retries permitted until 2024-09-25 19:23:55.613280662 +0000 UTC m=+14.078803391 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9kp7x" (UniqueName: "kubernetes.io/projected/2b2aedfc-825e-49b1-8b58-6c8a1a948bf4-kube-api-access-9kp7x") pod "storage-provisioner" (UID: "2b2aedfc-825e-49b1-8b58-6c8a1a948bf4") : configmap "kube-root-ca.crt" not found
	Sep 25 19:23:55 running-upgrade-796000 kubelet[10318]: I0925 19:23:55.610806   10318 topology_manager.go:200] "Topology Admit Handler"
	Sep 25 19:23:55 running-upgrade-796000 kubelet[10318]: I0925 19:23:55.611594   10318 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b0df1f2-fbd7-4558-bb46-43e5c38eb11a-xtables-lock\") pod \"kube-proxy-7z6v2\" (UID: \"2b0df1f2-fbd7-4558-bb46-43e5c38eb11a\") " pod="kube-system/kube-proxy-7z6v2"
	Sep 25 19:23:55 running-upgrade-796000 kubelet[10318]: I0925 19:23:55.611612   10318 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74pjz\" (UniqueName: \"kubernetes.io/projected/2b0df1f2-fbd7-4558-bb46-43e5c38eb11a-kube-api-access-74pjz\") pod \"kube-proxy-7z6v2\" (UID: \"2b0df1f2-fbd7-4558-bb46-43e5c38eb11a\") " pod="kube-system/kube-proxy-7z6v2"
	Sep 25 19:23:55 running-upgrade-796000 kubelet[10318]: I0925 19:23:55.611629   10318 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2b0df1f2-fbd7-4558-bb46-43e5c38eb11a-kube-proxy\") pod \"kube-proxy-7z6v2\" (UID: \"2b0df1f2-fbd7-4558-bb46-43e5c38eb11a\") " pod="kube-system/kube-proxy-7z6v2"
	Sep 25 19:23:55 running-upgrade-796000 kubelet[10318]: I0925 19:23:55.611638   10318 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b0df1f2-fbd7-4558-bb46-43e5c38eb11a-lib-modules\") pod \"kube-proxy-7z6v2\" (UID: \"2b0df1f2-fbd7-4558-bb46-43e5c38eb11a\") " pod="kube-system/kube-proxy-7z6v2"
	Sep 25 19:23:55 running-upgrade-796000 kubelet[10318]: I0925 19:23:55.913839   10318 topology_manager.go:200] "Topology Admit Handler"
	Sep 25 19:23:55 running-upgrade-796000 kubelet[10318]: I0925 19:23:55.913909   10318 topology_manager.go:200] "Topology Admit Handler"
	Sep 25 19:23:56 running-upgrade-796000 kubelet[10318]: I0925 19:23:56.114144   10318 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwtv8\" (UniqueName: \"kubernetes.io/projected/9a6d06d3-a17c-476a-822f-f976f925e683-kube-api-access-jwtv8\") pod \"coredns-6d4b75cb6d-2xq7h\" (UID: \"9a6d06d3-a17c-476a-822f-f976f925e683\") " pod="kube-system/coredns-6d4b75cb6d-2xq7h"
	Sep 25 19:23:56 running-upgrade-796000 kubelet[10318]: I0925 19:23:56.114178   10318 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3c21727-1d4f-41cd-a901-c488305f9018-config-volume\") pod \"coredns-6d4b75cb6d-hl9df\" (UID: \"b3c21727-1d4f-41cd-a901-c488305f9018\") " pod="kube-system/coredns-6d4b75cb6d-hl9df"
	Sep 25 19:23:56 running-upgrade-796000 kubelet[10318]: I0925 19:23:56.114190   10318 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a6d06d3-a17c-476a-822f-f976f925e683-config-volume\") pod \"coredns-6d4b75cb6d-2xq7h\" (UID: \"9a6d06d3-a17c-476a-822f-f976f925e683\") " pod="kube-system/coredns-6d4b75cb6d-2xq7h"
	Sep 25 19:23:56 running-upgrade-796000 kubelet[10318]: I0925 19:23:56.114201   10318 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qltjk\" (UniqueName: \"kubernetes.io/projected/b3c21727-1d4f-41cd-a901-c488305f9018-kube-api-access-qltjk\") pod \"coredns-6d4b75cb6d-hl9df\" (UID: \"b3c21727-1d4f-41cd-a901-c488305f9018\") " pod="kube-system/coredns-6d4b75cb6d-hl9df"
	Sep 25 19:27:45 running-upgrade-796000 kubelet[10318]: I0925 19:27:45.041057   10318 scope.go:110] "RemoveContainer" containerID="2cf271d59fa5c1ea82ef94a90f92202d60b6e4f019758706794852e96ae4cc9a"
	Sep 25 19:27:45 running-upgrade-796000 kubelet[10318]: I0925 19:27:45.087845   10318 scope.go:110] "RemoveContainer" containerID="578e7ca358901db2dd6f5c3138093e75e35a3c07b4172e5a6432d76bf1164fad"
	
	
	==> storage-provisioner [d3b3239f3636] <==
	I0925 19:23:56.036335       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0925 19:23:56.041558       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0925 19:23:56.041614       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0925 19:23:56.046653       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0925 19:23:56.046757       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-796000_97ea147b-4af8-40b5-89bf-39eba8ab3f33!
	I0925 19:23:56.047804       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e240f559-40c9-4e10-88e0-a98b2e245308", APIVersion:"v1", ResourceVersion:"373", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-796000_97ea147b-4af8-40b5-89bf-39eba8ab3f33 became leader
	I0925 19:23:56.146961       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-796000_97ea147b-4af8-40b5-89bf-39eba8ab3f33!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-796000 -n running-upgrade-796000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-796000 -n running-upgrade-796000: exit status 2 (15.57137575s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-796000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-796000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-796000
--- FAIL: TestRunningBinaryUpgrade (606.29s)
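For reference, the GUEST_START failure above comes from minikube's readiness loop (api_server.go:253/269) polling https://10.0.2.15:8443/healthz inside the guest every five seconds until the 6m0s node deadline expired. A minimal manual probe of the same endpoint, as a sketch only, assuming curl is present in the guest image and accepting the cluster's self-signed certificate with -k, would be:

	out/minikube-darwin-arm64 -p running-upgrade-796000 ssh -- curl -sk --max-time 5 https://10.0.2.15:8443/healthz

A healthy apiserver answers with the plain body "ok"; in this run every probe hit the client timeout instead, which is what "apiserver healthz never reported healthy" summarizes.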

                                                
                                    
TestKubernetesUpgrade (18.26s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-378000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-378000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.835153625s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-378000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-378000" primary control-plane node in "kubernetes-upgrade-378000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-378000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0925 12:21:12.375052    4946 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:21:12.375201    4946 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:21:12.375205    4946 out.go:358] Setting ErrFile to fd 2...
	I0925 12:21:12.375207    4946 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:21:12.375318    4946 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:21:12.376509    4946 out.go:352] Setting JSON to false
	I0925 12:21:12.394426    4946 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4843,"bootTime":1727287229,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 12:21:12.394501    4946 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 12:21:12.401188    4946 out.go:177] * [kubernetes-upgrade-378000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 12:21:12.409348    4946 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 12:21:12.409394    4946 notify.go:220] Checking for updates...
	I0925 12:21:12.416273    4946 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:21:12.419287    4946 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 12:21:12.422184    4946 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 12:21:12.425280    4946 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 12:21:12.428352    4946 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 12:21:12.433705    4946 config.go:182] Loaded profile config "multinode-761000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:21:12.433781    4946 config.go:182] Loaded profile config "running-upgrade-796000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0925 12:21:12.433834    4946 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 12:21:12.438296    4946 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 12:21:12.445152    4946 start.go:297] selected driver: qemu2
	I0925 12:21:12.445157    4946 start.go:901] validating driver "qemu2" against <nil>
	I0925 12:21:12.445162    4946 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 12:21:12.447345    4946 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0925 12:21:12.450263    4946 out.go:177] * Automatically selected the socket_vmnet network
	I0925 12:21:12.453368    4946 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0925 12:21:12.453382    4946 cni.go:84] Creating CNI manager for ""
	I0925 12:21:12.453402    4946 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0925 12:21:12.453437    4946 start.go:340] cluster config:
	{Name:kubernetes-upgrade-378000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-378000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:21:12.456930    4946 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:21:12.464266    4946 out.go:177] * Starting "kubernetes-upgrade-378000" primary control-plane node in "kubernetes-upgrade-378000" cluster
	I0925 12:21:12.468322    4946 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0925 12:21:12.468337    4946 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0925 12:21:12.468345    4946 cache.go:56] Caching tarball of preloaded images
	I0925 12:21:12.468407    4946 preload.go:172] Found /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 12:21:12.468413    4946 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0925 12:21:12.468476    4946 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/kubernetes-upgrade-378000/config.json ...
	I0925 12:21:12.468495    4946 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/kubernetes-upgrade-378000/config.json: {Name:mk228d6b11d9a2ed9875eaa75e3041f8f6fd3d9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:21:12.468827    4946 start.go:360] acquireMachinesLock for kubernetes-upgrade-378000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:21:12.468859    4946 start.go:364] duration metric: took 24.417µs to acquireMachinesLock for "kubernetes-upgrade-378000"
	I0925 12:21:12.468870    4946 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-378000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-378000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:21:12.468895    4946 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:21:12.472306    4946 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0925 12:21:12.487600    4946 start.go:159] libmachine.API.Create for "kubernetes-upgrade-378000" (driver="qemu2")
	I0925 12:21:12.487628    4946 client.go:168] LocalClient.Create starting
	I0925 12:21:12.487685    4946 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:21:12.487717    4946 main.go:141] libmachine: Decoding PEM data...
	I0925 12:21:12.487727    4946 main.go:141] libmachine: Parsing certificate...
	I0925 12:21:12.487761    4946 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:21:12.487783    4946 main.go:141] libmachine: Decoding PEM data...
	I0925 12:21:12.487790    4946 main.go:141] libmachine: Parsing certificate...
	I0925 12:21:12.488171    4946 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:21:12.659223    4946 main.go:141] libmachine: Creating SSH key...
	I0925 12:21:12.764984    4946 main.go:141] libmachine: Creating Disk image...
	I0925 12:21:12.764991    4946 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:21:12.766004    4946 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubernetes-upgrade-378000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubernetes-upgrade-378000/disk.qcow2
	I0925 12:21:12.775408    4946 main.go:141] libmachine: STDOUT: 
	I0925 12:21:12.775428    4946 main.go:141] libmachine: STDERR: 
	I0925 12:21:12.775488    4946 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubernetes-upgrade-378000/disk.qcow2 +20000M
	I0925 12:21:12.783334    4946 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:21:12.783351    4946 main.go:141] libmachine: STDERR: 
	I0925 12:21:12.783365    4946 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubernetes-upgrade-378000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubernetes-upgrade-378000/disk.qcow2
	I0925 12:21:12.783371    4946 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:21:12.783387    4946 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:21:12.783428    4946 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubernetes-upgrade-378000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubernetes-upgrade-378000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubernetes-upgrade-378000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:3a:9f:22:99:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubernetes-upgrade-378000/disk.qcow2
	I0925 12:21:12.785000    4946 main.go:141] libmachine: STDOUT: 
	I0925 12:21:12.785018    4946 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:21:12.785038    4946 client.go:171] duration metric: took 297.408958ms to LocalClient.Create
	I0925 12:21:14.787194    4946 start.go:128] duration metric: took 2.318310583s to createHost
	I0925 12:21:14.787293    4946 start.go:83] releasing machines lock for "kubernetes-upgrade-378000", held for 2.318466375s
	W0925 12:21:14.787367    4946 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:21:14.794736    4946 out.go:177] * Deleting "kubernetes-upgrade-378000" in qemu2 ...
	W0925 12:21:14.830505    4946 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:21:14.830538    4946 start.go:729] Will try again in 5 seconds ...
	I0925 12:21:19.832615    4946 start.go:360] acquireMachinesLock for kubernetes-upgrade-378000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:21:19.833192    4946 start.go:364] duration metric: took 488.834µs to acquireMachinesLock for "kubernetes-upgrade-378000"
	I0925 12:21:19.833260    4946 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-378000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-378000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:21:19.833546    4946 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:21:19.842101    4946 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0925 12:21:19.892294    4946 start.go:159] libmachine.API.Create for "kubernetes-upgrade-378000" (driver="qemu2")
	I0925 12:21:19.892376    4946 client.go:168] LocalClient.Create starting
	I0925 12:21:19.892546    4946 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:21:19.892631    4946 main.go:141] libmachine: Decoding PEM data...
	I0925 12:21:19.892647    4946 main.go:141] libmachine: Parsing certificate...
	I0925 12:21:19.892720    4946 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:21:19.892769    4946 main.go:141] libmachine: Decoding PEM data...
	I0925 12:21:19.892784    4946 main.go:141] libmachine: Parsing certificate...
	I0925 12:21:19.893475    4946 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:21:20.066907    4946 main.go:141] libmachine: Creating SSH key...
	I0925 12:21:20.119557    4946 main.go:141] libmachine: Creating Disk image...
	I0925 12:21:20.119563    4946 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:21:20.119760    4946 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubernetes-upgrade-378000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubernetes-upgrade-378000/disk.qcow2
	I0925 12:21:20.128840    4946 main.go:141] libmachine: STDOUT: 
	I0925 12:21:20.128861    4946 main.go:141] libmachine: STDERR: 
	I0925 12:21:20.128921    4946 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubernetes-upgrade-378000/disk.qcow2 +20000M
	I0925 12:21:20.136785    4946 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:21:20.136810    4946 main.go:141] libmachine: STDERR: 
	I0925 12:21:20.136829    4946 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubernetes-upgrade-378000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubernetes-upgrade-378000/disk.qcow2
	I0925 12:21:20.136834    4946 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:21:20.136841    4946 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:21:20.136870    4946 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubernetes-upgrade-378000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubernetes-upgrade-378000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubernetes-upgrade-378000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:20:11:5d:80:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubernetes-upgrade-378000/disk.qcow2
	I0925 12:21:20.138523    4946 main.go:141] libmachine: STDOUT: 
	I0925 12:21:20.138544    4946 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:21:20.138556    4946 client.go:171] duration metric: took 246.162667ms to LocalClient.Create
	I0925 12:21:22.140650    4946 start.go:128] duration metric: took 2.307125875s to createHost
	I0925 12:21:22.140693    4946 start.go:83] releasing machines lock for "kubernetes-upgrade-378000", held for 2.307524s
	W0925 12:21:22.140833    4946 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-378000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-378000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:21:22.155235    4946 out.go:201] 
	W0925 12:21:22.159123    4946 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:21:22.159138    4946 out.go:270] * 
	* 
	W0925 12:21:22.159922    4946 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 12:21:22.171062    4946 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-378000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-378000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-378000: (3.004079792s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-378000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-378000 status --format={{.Host}}: exit status 7 (60.499417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-378000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-378000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.184565834s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-378000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-378000" primary control-plane node in "kubernetes-upgrade-378000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-378000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-378000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0925 12:21:25.278353    4981 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:21:25.278507    4981 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:21:25.278510    4981 out.go:358] Setting ErrFile to fd 2...
	I0925 12:21:25.278513    4981 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:21:25.278640    4981 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:21:25.279653    4981 out.go:352] Setting JSON to false
	I0925 12:21:25.295726    4981 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4856,"bootTime":1727287229,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 12:21:25.295803    4981 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 12:21:25.300175    4981 out.go:177] * [kubernetes-upgrade-378000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 12:21:25.307020    4981 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 12:21:25.307098    4981 notify.go:220] Checking for updates...
	I0925 12:21:25.315026    4981 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:21:25.317989    4981 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 12:21:25.321989    4981 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 12:21:25.324983    4981 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 12:21:25.327996    4981 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 12:21:25.331281    4981 config.go:182] Loaded profile config "kubernetes-upgrade-378000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0925 12:21:25.331542    4981 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 12:21:25.335972    4981 out.go:177] * Using the qemu2 driver based on existing profile
	I0925 12:21:25.342986    4981 start.go:297] selected driver: qemu2
	I0925 12:21:25.342992    4981 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-378000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-378000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:21:25.343037    4981 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 12:21:25.345334    4981 cni.go:84] Creating CNI manager for ""
	I0925 12:21:25.345366    4981 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 12:21:25.345397    4981 start.go:340] cluster config:
	{Name:kubernetes-upgrade-378000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-378000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:21:25.348704    4981 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:21:25.356963    4981 out.go:177] * Starting "kubernetes-upgrade-378000" primary control-plane node in "kubernetes-upgrade-378000" cluster
	I0925 12:21:25.360867    4981 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 12:21:25.360881    4981 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0925 12:21:25.360892    4981 cache.go:56] Caching tarball of preloaded images
	I0925 12:21:25.360941    4981 preload.go:172] Found /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 12:21:25.360946    4981 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0925 12:21:25.360988    4981 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/kubernetes-upgrade-378000/config.json ...
	I0925 12:21:25.361445    4981 start.go:360] acquireMachinesLock for kubernetes-upgrade-378000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:21:25.361470    4981 start.go:364] duration metric: took 19.958µs to acquireMachinesLock for "kubernetes-upgrade-378000"
	I0925 12:21:25.361479    4981 start.go:96] Skipping create...Using existing machine configuration
	I0925 12:21:25.361484    4981 fix.go:54] fixHost starting: 
	I0925 12:21:25.361595    4981 fix.go:112] recreateIfNeeded on kubernetes-upgrade-378000: state=Stopped err=<nil>
	W0925 12:21:25.361602    4981 fix.go:138] unexpected machine state, will restart: <nil>
	I0925 12:21:25.365066    4981 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-378000" ...
	I0925 12:21:25.372992    4981 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:21:25.373029    4981 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubernetes-upgrade-378000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubernetes-upgrade-378000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubernetes-upgrade-378000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:20:11:5d:80:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubernetes-upgrade-378000/disk.qcow2
	I0925 12:21:25.374828    4981 main.go:141] libmachine: STDOUT: 
	I0925 12:21:25.374843    4981 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:21:25.374869    4981 fix.go:56] duration metric: took 13.384542ms for fixHost
	I0925 12:21:25.374873    4981 start.go:83] releasing machines lock for "kubernetes-upgrade-378000", held for 13.399333ms
	W0925 12:21:25.374879    4981 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:21:25.374904    4981 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:21:25.374907    4981 start.go:729] Will try again in 5 seconds ...
	I0925 12:21:30.377107    4981 start.go:360] acquireMachinesLock for kubernetes-upgrade-378000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:21:30.377612    4981 start.go:364] duration metric: took 422.208µs to acquireMachinesLock for "kubernetes-upgrade-378000"
	I0925 12:21:30.377772    4981 start.go:96] Skipping create...Using existing machine configuration
	I0925 12:21:30.377793    4981 fix.go:54] fixHost starting: 
	I0925 12:21:30.378533    4981 fix.go:112] recreateIfNeeded on kubernetes-upgrade-378000: state=Stopped err=<nil>
	W0925 12:21:30.378561    4981 fix.go:138] unexpected machine state, will restart: <nil>
	I0925 12:21:30.387020    4981 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-378000" ...
	I0925 12:21:30.390948    4981 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:21:30.391118    4981 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubernetes-upgrade-378000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubernetes-upgrade-378000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubernetes-upgrade-378000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:20:11:5d:80:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubernetes-upgrade-378000/disk.qcow2
	I0925 12:21:30.400417    4981 main.go:141] libmachine: STDOUT: 
	I0925 12:21:30.400473    4981 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:21:30.400552    4981 fix.go:56] duration metric: took 22.761625ms for fixHost
	I0925 12:21:30.400599    4981 start.go:83] releasing machines lock for "kubernetes-upgrade-378000", held for 22.935292ms
	W0925 12:21:30.400748    4981 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-378000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-378000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:21:30.405282    4981 out.go:201] 
	W0925 12:21:30.408991    4981 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:21:30.409008    4981 out.go:270] * 
	* 
	W0925 12:21:30.411741    4981 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 12:21:30.420950    4981 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-378000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-378000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-378000 version --output=json: exit status 1 (58.590875ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-378000" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-09-25 12:21:30.492783 -0700 PDT m=+3166.586637543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-378000 -n kubernetes-upgrade-378000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-378000 -n kubernetes-upgrade-378000: exit status 7 (33.047584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-378000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-378000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-378000
--- FAIL: TestKubernetesUpgrade (18.26s)
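
Every attempt in this test, whether creating or restarting the VM, dies at the same step: socket_vmnet_client cannot reach the vmnet daemon's unix socket, so QEMU is never launched and minikube exits with GUEST_PROVISION (exit status 80). A minimal standalone probe for this failure mode, using the socket path that appears in the command lines above (illustrative Go, not part of minikube or this test suite), distinguishes a missing socket file from a file with no daemon listening behind it:

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // Socket path taken from the failing qemu invocations above.
        const sock = "/var/run/socket_vmnet"
        if _, err := os.Stat(sock); err != nil {
            fmt.Fprintf(os.Stderr, "socket file missing: %v\n", err)
            os.Exit(1)
        }
        // Stat can succeed while the dial is refused: the file may be a
        // stale leftover with no socket_vmnet daemon accepting connections,
        // which matches the "Connection refused" seen in this test.
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "dial failed: %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

While the daemon is down, minikube's built-in retry ("Will try again in 5 seconds ...") cannot succeed; restarting the socket_vmnet service on the agent, however it is managed there, is the likely fix.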

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.35s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19681
- KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1853649846/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.35s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.03s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19681
- KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2381071967/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.03s)
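
Both TestHyperkitDriverSkipUpgrade subtests fail for the same environmental reason rather than a driver-upgrade regression: hyperkit exists only for Intel macOS, and this agent is Apple silicon, so minikube exits with DRV_UNSUPPORTED_OS (exit status 56) before any upgrade logic runs. A sketch of the kind of architecture guard that would turn these into skips instead of failures on arm64 agents (hypothetical helper and test names, not the suite's actual code):

    package driver_test

    import (
        "runtime"
        "testing"
    )

    // skipUnlessHyperkitSupported is a hypothetical guard: hyperkit is an
    // Intel-only hypervisor, so anything exercising it can pass only on
    // darwin/amd64 hosts.
    func skipUnlessHyperkitSupported(t *testing.T) {
        t.Helper()
        if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
            t.Skipf("hyperkit is not supported on %s/%s", runtime.GOOS, runtime.GOARCH)
        }
    }

    func TestHyperkitGuardExample(t *testing.T) {
        skipUnlessHyperkitSupported(t)
        // On a supported host, the upgrade scenario would run here.
    }

With such a guard the two subtests would report SKIP on this agent instead of counting against the failure total.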

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (575.58s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2944095029 start -p stopped-upgrade-814000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2944095029 start -p stopped-upgrade-814000 --memory=2200 --vm-driver=qemu2 : (40.904074542s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2944095029 -p stopped-upgrade-814000 stop
E0925 12:22:22.586151    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/functional-251000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2944095029 -p stopped-upgrade-814000 stop: (12.126162833s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-814000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0925 12:22:38.795488    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/client.crt: no such file or directory" logger="UnhandledError"
E0925 12:25:25.670457    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/functional-251000/client.crt: no such file or directory" logger="UnhandledError"
E0925 12:27:22.582149    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/functional-251000/client.crt: no such file or directory" logger="UnhandledError"
E0925 12:27:38.790103    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-814000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.467540625s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-814000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-814000" primary control-plane node in "stopped-upgrade-814000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-814000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0925 12:22:24.792093    5014 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:22:24.792240    5014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:22:24.792244    5014 out.go:358] Setting ErrFile to fd 2...
	I0925 12:22:24.792247    5014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:22:24.792406    5014 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:22:24.793543    5014 out.go:352] Setting JSON to false
	I0925 12:22:24.812242    5014 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4915,"bootTime":1727287229,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 12:22:24.812318    5014 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 12:22:24.817777    5014 out.go:177] * [stopped-upgrade-814000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 12:22:24.825707    5014 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 12:22:24.825792    5014 notify.go:220] Checking for updates...
	I0925 12:22:24.832802    5014 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:22:24.835754    5014 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 12:22:24.839753    5014 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 12:22:24.842690    5014 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 12:22:24.845760    5014 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 12:22:24.849073    5014 config.go:182] Loaded profile config "stopped-upgrade-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0925 12:22:24.852705    5014 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0925 12:22:24.855713    5014 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 12:22:24.859742    5014 out.go:177] * Using the qemu2 driver based on existing profile
	I0925 12:22:24.866714    5014 start.go:297] selected driver: qemu2
	I0925 12:22:24.866722    5014 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-814000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50513 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-814000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0925 12:22:24.866767    5014 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 12:22:24.869477    5014 cni.go:84] Creating CNI manager for ""
	I0925 12:22:24.869512    5014 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 12:22:24.869537    5014 start.go:340] cluster config:
	{Name:stopped-upgrade-814000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50513 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-814000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0925 12:22:24.869605    5014 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:22:24.875779    5014 out.go:177] * Starting "stopped-upgrade-814000" primary control-plane node in "stopped-upgrade-814000" cluster
	I0925 12:22:24.879736    5014 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0925 12:22:24.879751    5014 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0925 12:22:24.879759    5014 cache.go:56] Caching tarball of preloaded images
	I0925 12:22:24.879817    5014 preload.go:172] Found /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 12:22:24.879823    5014 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0925 12:22:24.879880    5014 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/config.json ...
	I0925 12:22:24.880333    5014 start.go:360] acquireMachinesLock for stopped-upgrade-814000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:22:24.880363    5014 start.go:364] duration metric: took 23.541µs to acquireMachinesLock for "stopped-upgrade-814000"
	I0925 12:22:24.880373    5014 start.go:96] Skipping create...Using existing machine configuration
	I0925 12:22:24.880379    5014 fix.go:54] fixHost starting: 
	I0925 12:22:24.880497    5014 fix.go:112] recreateIfNeeded on stopped-upgrade-814000: state=Stopped err=<nil>
	W0925 12:22:24.880505    5014 fix.go:138] unexpected machine state, will restart: <nil>
	I0925 12:22:24.884739    5014 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-814000" ...
	I0925 12:22:24.892729    5014 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:22:24.892807    5014 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/stopped-upgrade-814000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/stopped-upgrade-814000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/stopped-upgrade-814000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50480-:22,hostfwd=tcp::50481-:2376,hostname=stopped-upgrade-814000 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/stopped-upgrade-814000/disk.qcow2
	I0925 12:22:24.939237    5014 main.go:141] libmachine: STDOUT: 
	I0925 12:22:24.939268    5014 main.go:141] libmachine: STDERR: 
	I0925 12:22:24.939275    5014 main.go:141] libmachine: Waiting for VM to start (ssh -p 50480 docker@127.0.0.1)...
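For reference, the qemu-system-aarch64 invocation logged above can be reconstructed with a short Go sketch. This is a minimal sketch only: the machine directory is a placeholder, the forwarded ports are the ones from this run, and the command is printed rather than executed.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Hypothetical machine directory; in this log it is
	// .minikube/machines/stopped-upgrade-814000.
	dir := "/tmp/stopped-upgrade-814000"
	args := []string{
		"-M", "virt,highmem=off",
		"-cpu", "host",
		"-accel", "hvf", // Hypervisor.framework acceleration, as chosen above
		"-m", "2200", "-smp", "2",
		"-boot", "d",
		"-cdrom", dir + "/boot2docker.iso",
		// QMP socket and pidfile let the caller manage the daemonized VM.
		"-qmp", "unix:" + dir + "/monitor,server,nowait",
		"-pidfile", dir + "/qemu.pid",
		// User-mode networking: forward host ports to guest SSH (22) and Docker (2376).
		"-nic", "user,model=virtio,hostfwd=tcp::50480-:22,hostfwd=tcp::50481-:2376",
		"-daemonize", dir + "/disk.qcow2",
	}
	cmd := exec.Command("qemu-system-aarch64", args...)
	fmt.Println(strings.Join(cmd.Args, " ")) // print instead of running in this sketch
}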
	I0925 12:22:44.998183    5014 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/config.json ...
	I0925 12:22:44.999113    5014 machine.go:93] provisionDockerMachine start ...
	I0925 12:22:44.999341    5014 main.go:141] libmachine: Using SSH client type: native
	I0925 12:22:44.999837    5014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102bd1c00] 0x102bd4440 <nil>  [] 0s} localhost 50480 <nil> <nil>}
	I0925 12:22:44.999851    5014 main.go:141] libmachine: About to run SSH command:
	hostname
	I0925 12:22:45.101847    5014 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0925 12:22:45.101897    5014 buildroot.go:166] provisioning hostname "stopped-upgrade-814000"
	I0925 12:22:45.102076    5014 main.go:141] libmachine: Using SSH client type: native
	I0925 12:22:45.102408    5014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102bd1c00] 0x102bd4440 <nil>  [] 0s} localhost 50480 <nil> <nil>}
	I0925 12:22:45.102425    5014 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-814000 && echo "stopped-upgrade-814000" | sudo tee /etc/hostname
	I0925 12:22:45.199541    5014 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-814000
	
	I0925 12:22:45.199652    5014 main.go:141] libmachine: Using SSH client type: native
	I0925 12:22:45.199855    5014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102bd1c00] 0x102bd4440 <nil>  [] 0s} localhost 50480 <nil> <nil>}
	I0925 12:22:45.199869    5014 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-814000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-814000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-814000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0925 12:22:45.282196    5014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
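The shell block above keeps /etc/hosts consistent with the new hostname: rewrite an existing 127.0.1.1 line if there is one, otherwise append a new entry, and do nothing if the hostname already appears. A minimal Go sketch of that rewrite, assuming a local hosts.sample file stands in for /etc/hosts (the real provisioner edits the guest's file over SSH with sudo):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the shell logic above: if no line mentions the
// hostname, either rewrite an existing 127.0.1.1 line or append a new one.
func ensureHostsEntry(contents, hostname string) string {
	lines := strings.Split(contents, "\n")
	for _, l := range lines {
		if strings.Contains(l, hostname) {
			return contents // already present, nothing to do
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			return strings.Join(lines, "\n")
		}
	}
	return contents + "\n127.0.1.1 " + hostname + "\n"
}

func main() {
	// Placeholder input file standing in for /etc/hosts.
	b, err := os.ReadFile("hosts.sample")
	if err != nil {
		b = []byte("127.0.0.1 localhost")
	}
	fmt.Print(ensureHostsEntry(string(b), "stopped-upgrade-814000"))
}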
	I0925 12:22:45.282215    5014 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19681-1412/.minikube CaCertPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19681-1412/.minikube}
	I0925 12:22:45.282227    5014 buildroot.go:174] setting up certificates
	I0925 12:22:45.282235    5014 provision.go:84] configureAuth start
	I0925 12:22:45.282244    5014 provision.go:143] copyHostCerts
	I0925 12:22:45.282330    5014 exec_runner.go:144] found /Users/jenkins/minikube-integration/19681-1412/.minikube/ca.pem, removing ...
	I0925 12:22:45.282342    5014 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19681-1412/.minikube/ca.pem
	I0925 12:22:45.282472    5014 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19681-1412/.minikube/ca.pem (1082 bytes)
	I0925 12:22:45.282697    5014 exec_runner.go:144] found /Users/jenkins/minikube-integration/19681-1412/.minikube/cert.pem, removing ...
	I0925 12:22:45.282703    5014 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19681-1412/.minikube/cert.pem
	I0925 12:22:45.282772    5014 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19681-1412/.minikube/cert.pem (1123 bytes)
	I0925 12:22:45.282915    5014 exec_runner.go:144] found /Users/jenkins/minikube-integration/19681-1412/.minikube/key.pem, removing ...
	I0925 12:22:45.282920    5014 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19681-1412/.minikube/key.pem
	I0925 12:22:45.282987    5014 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19681-1412/.minikube/key.pem (1675 bytes)
	I0925 12:22:45.283102    5014 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-814000 san=[127.0.0.1 localhost minikube stopped-upgrade-814000]
	I0925 12:22:45.406731    5014 provision.go:177] copyRemoteCerts
	I0925 12:22:45.406773    5014 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0925 12:22:45.406781    5014 sshutil.go:53] new ssh client: &{IP:localhost Port:50480 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/stopped-upgrade-814000/id_rsa Username:docker}
	I0925 12:22:45.445878    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0925 12:22:45.453109    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0925 12:22:45.459920    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0925 12:22:45.466446    5014 provision.go:87] duration metric: took 184.204958ms to configureAuth
	I0925 12:22:45.466454    5014 buildroot.go:189] setting minikube options for container-runtime
	I0925 12:22:45.466554    5014 config.go:182] Loaded profile config "stopped-upgrade-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0925 12:22:45.466597    5014 main.go:141] libmachine: Using SSH client type: native
	I0925 12:22:45.466687    5014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102bd1c00] 0x102bd4440 <nil>  [] 0s} localhost 50480 <nil> <nil>}
	I0925 12:22:45.466691    5014 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0925 12:22:45.543477    5014 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0925 12:22:45.543488    5014 buildroot.go:70] root file system type: tmpfs
	I0925 12:22:45.543545    5014 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0925 12:22:45.543610    5014 main.go:141] libmachine: Using SSH client type: native
	I0925 12:22:45.543731    5014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102bd1c00] 0x102bd4440 <nil>  [] 0s} localhost 50480 <nil> <nil>}
	I0925 12:22:45.543766    5014 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0925 12:22:45.621343    5014 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0925 12:22:45.621405    5014 main.go:141] libmachine: Using SSH client type: native
	I0925 12:22:45.621528    5014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102bd1c00] 0x102bd4440 <nil>  [] 0s} localhost 50480 <nil> <nil>}
	I0925 12:22:45.621538    5014 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0925 12:22:45.995561    5014 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0925 12:22:45.995575    5014 machine.go:96] duration metric: took 996.470709ms to provisionDockerMachine
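The unit update above is deliberately idempotent: the new unit is written to docker.service.new, diffed against the installed file, and only swapped in (followed by daemon-reload, enable, and restart) when they differ, which is why this first run shows diff failing on the not-yet-existing original. A hedged Go sketch of that compare-then-replace idiom over local placeholder files:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// replaceIfChanged mirrors the "diff || { mv; systemctl restart; }" idiom:
// newPath is swapped into path only when the contents differ (or path is
// absent), and the caller is told whether a service restart is needed.
func replaceIfChanged(path, newPath string) (restart bool, err error) {
	oldData, readErr := os.ReadFile(path)
	newData, err := os.ReadFile(newPath)
	if err != nil {
		return false, err
	}
	if readErr == nil && bytes.Equal(oldData, newData) {
		return false, os.Remove(newPath) // unchanged: drop the staged copy
	}
	return true, os.Rename(newPath, path)
}

func main() {
	// Placeholder paths; the real runner does this over SSH against
	// /lib/systemd/system/docker.service with sudo.
	os.WriteFile("docker.service.new", []byte("[Unit]\n"), 0o644)
	restart, err := replaceIfChanged("docker.service", "docker.service.new")
	fmt.Println("restart needed:", restart, "err:", err)
}

Returning the restart decision separately mirrors the shell's short-circuit: an unchanged unit costs one comparison and no service restart.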
	I0925 12:22:45.995582    5014 start.go:293] postStartSetup for "stopped-upgrade-814000" (driver="qemu2")
	I0925 12:22:45.995599    5014 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0925 12:22:45.995664    5014 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0925 12:22:45.995673    5014 sshutil.go:53] new ssh client: &{IP:localhost Port:50480 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/stopped-upgrade-814000/id_rsa Username:docker}
	I0925 12:22:46.036424    5014 ssh_runner.go:195] Run: cat /etc/os-release
	I0925 12:22:46.037757    5014 info.go:137] Remote host: Buildroot 2021.02.12
	I0925 12:22:46.037764    5014 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19681-1412/.minikube/addons for local assets ...
	I0925 12:22:46.037849    5014 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19681-1412/.minikube/files for local assets ...
	I0925 12:22:46.037980    5014 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19681-1412/.minikube/files/etc/ssl/certs/19342.pem -> 19342.pem in /etc/ssl/certs
	I0925 12:22:46.038119    5014 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0925 12:22:46.041133    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/files/etc/ssl/certs/19342.pem --> /etc/ssl/certs/19342.pem (1708 bytes)
	I0925 12:22:46.047758    5014 start.go:296] duration metric: took 52.171625ms for postStartSetup
	I0925 12:22:46.047772    5014 fix.go:56] duration metric: took 21.167787792s for fixHost
	I0925 12:22:46.047808    5014 main.go:141] libmachine: Using SSH client type: native
	I0925 12:22:46.047919    5014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102bd1c00] 0x102bd4440 <nil>  [] 0s} localhost 50480 <nil> <nil>}
	I0925 12:22:46.047925    5014 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0925 12:22:46.119744    5014 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727292166.409018338
	
	I0925 12:22:46.119751    5014 fix.go:216] guest clock: 1727292166.409018338
	I0925 12:22:46.119755    5014 fix.go:229] Guest: 2024-09-25 12:22:46.409018338 -0700 PDT Remote: 2024-09-25 12:22:46.047774 -0700 PDT m=+21.284178960 (delta=361.244338ms)
	I0925 12:22:46.119767    5014 fix.go:200] guest clock delta is within tolerance: 361.244338ms
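The guest clock check compares the output of date +%s.%N inside the VM against the host's wall clock and resyncs only when the delta exceeds a tolerance; here the 361ms delta is within bounds. A small sketch of the parse-and-compare step, reusing the timestamp literal from this log and an assumed 2s threshold:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseUnixNano turns "seconds.nanoseconds" (date +%s.%N) into a time.Time.
func parseUnixNano(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseUnixNano("1727292166.409018338") // value from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	tolerance := 2 * time.Second // placeholder threshold for this sketch
	fmt.Printf("delta=%v within=%v\n", delta, math.Abs(delta.Seconds()) < tolerance.Seconds())
}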
	I0925 12:22:46.119770    5014 start.go:83] releasing machines lock for "stopped-upgrade-814000", held for 21.239796083s
	I0925 12:22:46.119839    5014 ssh_runner.go:195] Run: cat /version.json
	I0925 12:22:46.119853    5014 sshutil.go:53] new ssh client: &{IP:localhost Port:50480 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/stopped-upgrade-814000/id_rsa Username:docker}
	I0925 12:22:46.119839    5014 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0925 12:22:46.119885    5014 sshutil.go:53] new ssh client: &{IP:localhost Port:50480 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/stopped-upgrade-814000/id_rsa Username:docker}
	W0925 12:22:46.120416    5014 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50480: connect: connection refused
	I0925 12:22:46.120438    5014 retry.go:31] will retry after 361.099596ms: dial tcp [::1]:50480: connect: connection refused
	W0925 12:22:46.156470    5014 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0925 12:22:46.156530    5014 ssh_runner.go:195] Run: systemctl --version
	I0925 12:22:46.158460    5014 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0925 12:22:46.160253    5014 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0925 12:22:46.160290    5014 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0925 12:22:46.163090    5014 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0925 12:22:46.167605    5014 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0925 12:22:46.167613    5014 start.go:495] detecting cgroup driver to use...
	I0925 12:22:46.167689    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 12:22:46.174696    5014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0925 12:22:46.178073    5014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0925 12:22:46.181347    5014 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0925 12:22:46.181376    5014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0925 12:22:46.184398    5014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 12:22:46.187198    5014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0925 12:22:46.190300    5014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 12:22:46.193587    5014 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0925 12:22:46.197059    5014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0925 12:22:46.200703    5014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0925 12:22:46.203782    5014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
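The sed passes above rewrite /etc/containerd/config.toml so containerd uses the cgroupfs driver and the runc v2 runtime. As one worked example, the SystemdCgroup toggle expressed as the equivalent regex rewrite in Go (sample TOML inline; the real runner executes sed over SSH):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true`
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(config, "${1}SystemdCgroup = false"))
}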
	I0925 12:22:46.206820    5014 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0925 12:22:46.209744    5014 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0925 12:22:46.212576    5014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 12:22:46.295336    5014 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0925 12:22:46.301298    5014 start.go:495] detecting cgroup driver to use...
	I0925 12:22:46.301377    5014 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0925 12:22:46.306980    5014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 12:22:46.311681    5014 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0925 12:22:46.318131    5014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 12:22:46.323087    5014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 12:22:46.327764    5014 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0925 12:22:46.368791    5014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 12:22:46.373514    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 12:22:46.378755    5014 ssh_runner.go:195] Run: which cri-dockerd
	I0925 12:22:46.380119    5014 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0925 12:22:46.382659    5014 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0925 12:22:46.387992    5014 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0925 12:22:46.466026    5014 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0925 12:22:46.540112    5014 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0925 12:22:46.540175    5014 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0925 12:22:46.545219    5014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 12:22:46.623213    5014 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0925 12:22:47.776622    5014 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.153412792s)
	I0925 12:22:47.776684    5014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0925 12:22:47.783535    5014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0925 12:22:47.788244    5014 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0925 12:22:47.866697    5014 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0925 12:22:47.937080    5014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 12:22:48.015618    5014 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0925 12:22:48.021365    5014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0925 12:22:48.025619    5014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 12:22:48.104893    5014 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0925 12:22:48.142864    5014 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0925 12:22:48.142951    5014 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0925 12:22:48.145930    5014 start.go:563] Will wait 60s for crictl version
	I0925 12:22:48.145992    5014 ssh_runner.go:195] Run: which crictl
	I0925 12:22:48.147452    5014 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0925 12:22:48.161930    5014 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0925 12:22:48.162021    5014 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0925 12:22:48.180983    5014 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0925 12:22:48.202340    5014 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0925 12:22:48.202424    5014 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0925 12:22:48.204026    5014 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0925 12:22:48.207670    5014 kubeadm.go:883] updating cluster {Name:stopped-upgrade-814000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50513 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-814000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0925 12:22:48.207716    5014 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0925 12:22:48.207776    5014 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 12:22:48.219030    5014 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0925 12:22:48.219039    5014 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0925 12:22:48.219099    5014 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0925 12:22:48.222197    5014 ssh_runner.go:195] Run: which lz4
	I0925 12:22:48.223476    5014 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0925 12:22:48.224788    5014 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0925 12:22:48.224798    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0925 12:22:49.199398    5014 docker.go:649] duration metric: took 975.977417ms to copy over tarball
	I0925 12:22:49.199467    5014 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0925 12:22:50.359251    5014 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.159791416s)
	I0925 12:22:50.359266    5014 ssh_runner.go:146] rm: /preloaded.tar.lz4
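Transfers follow a stat-then-copy pattern: stat -c "%s %y" on the guest path doubles as an existence check, a non-zero exit (as above) triggers the scp of the 359 MB preload tarball, and the archive is removed once extracted. A sketch of the copy-if-missing guard over local placeholder paths (the real runner also compares size and mtime, and copies over SSH):

package main

import (
	"fmt"
	"io"
	"os"
)

// copyIfMissing mirrors the existence check above: stat the destination and
// only perform the (expensive) copy when it is absent.
func copyIfMissing(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already there, skip the transfer
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Placeholder paths standing in for the preload tarball and its guest path.
	fmt.Println(copyIfMissing("preloaded.tar.lz4", "/tmp/preloaded.tar.lz4"))
}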
	I0925 12:22:50.374767    5014 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0925 12:22:50.378302    5014 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0925 12:22:50.383391    5014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 12:22:50.463264    5014 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0925 12:22:52.043980    5014 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.580719375s)
	I0925 12:22:52.044100    5014 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 12:22:52.062708    5014 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0925 12:22:52.062724    5014 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0925 12:22:52.062729    5014 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0925 12:22:52.066967    5014 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 12:22:52.068484    5014 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0925 12:22:52.070608    5014 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 12:22:52.070674    5014 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0925 12:22:52.072906    5014 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0925 12:22:52.073005    5014 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0925 12:22:52.074874    5014 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0925 12:22:52.074930    5014 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0925 12:22:52.076083    5014 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0925 12:22:52.076234    5014 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0925 12:22:52.077392    5014 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0925 12:22:52.077393    5014 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0925 12:22:52.078371    5014 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0925 12:22:52.078517    5014 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0925 12:22:52.079297    5014 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0925 12:22:52.080196    5014 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0925 12:22:52.505232    5014 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0925 12:22:52.511908    5014 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0925 12:22:52.519816    5014 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0925 12:22:52.519847    5014 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0925 12:22:52.519920    5014 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0925 12:22:52.523655    5014 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0925 12:22:52.524031    5014 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0925 12:22:52.544506    5014 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0925 12:22:52.549311    5014 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0925 12:22:52.557108    5014 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0925 12:22:52.557155    5014 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0925 12:22:52.557157    5014 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0925 12:22:52.557170    5014 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0925 12:22:52.557170    5014 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0925 12:22:52.557195    5014 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0925 12:22:52.557229    5014 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0925 12:22:52.557231    5014 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0925 12:22:52.557270    5014 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0925 12:22:52.557289    5014 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0925 12:22:52.560533    5014 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0925 12:22:52.560556    5014 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0925 12:22:52.560616    5014 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0925 12:22:52.581483    5014 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0925 12:22:52.581503    5014 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0925 12:22:52.581565    5014 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	W0925 12:22:52.589104    5014 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0925 12:22:52.589253    5014 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0925 12:22:52.592750    5014 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0925 12:22:52.592800    5014 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0925 12:22:52.592815    5014 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0925 12:22:52.592848    5014 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0925 12:22:52.593811    5014 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0925 12:22:52.603098    5014 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0925 12:22:52.603233    5014 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0925 12:22:52.603431    5014 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0925 12:22:52.603449    5014 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0925 12:22:52.603491    5014 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0925 12:22:52.604579    5014 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0925 12:22:52.604591    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0925 12:22:52.604617    5014 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0925 12:22:52.604629    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0925 12:22:52.624224    5014 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0925 12:22:52.624238    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0925 12:22:52.634943    5014 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0925 12:22:52.635075    5014 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0925 12:22:52.681528    5014 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0925 12:22:52.681557    5014 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0925 12:22:52.681584    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0925 12:22:52.759330    5014 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0925 12:22:52.759348    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0925 12:22:52.862400    5014 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0925 12:22:52.868513    5014 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0925 12:22:52.868647    5014 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 12:22:52.909259    5014 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0925 12:22:52.909286    5014 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 12:22:52.909364    5014 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 12:22:52.939429    5014 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0925 12:22:52.939449    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0925 12:22:52.953418    5014 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0925 12:22:52.953560    5014 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0925 12:22:53.090292    5014 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0925 12:22:53.090311    5014 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0925 12:22:53.090342    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0925 12:22:53.118283    5014 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0925 12:22:53.118308    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0925 12:22:53.359403    5014 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0925 12:22:53.359439    5014 cache_images.go:92] duration metric: took 1.296726834s to LoadCachedImages
	W0925 12:22:53.359475    5014 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
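Each "Loading image" step above streams a cached tarball into the daemon with sudo cat <file> | docker load; images missing from the host cache (the kube-apiserver family here) are reported and left for a later pull. A sketch of the same pipe built with os/exec, assuming a local docker daemon and a tarball named like the ones in this log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// dockerLoad streams an image tarball into the daemon, the pipe equivalent
// of: cat tarball | docker load
func dockerLoad(tarball string) error {
	f, err := os.Open(tarball)
	if err != nil {
		return err
	}
	defer f.Close()
	cmd := exec.Command("docker", "load")
	cmd.Stdin = f
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	fmt.Println(dockerLoad("pause_3.7")) // placeholder tarball name from the log
}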
	I0925 12:22:53.359480    5014 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0925 12:22:53.359524    5014 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-814000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-814000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0925 12:22:53.359601    5014 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0925 12:22:53.372692    5014 cni.go:84] Creating CNI manager for ""
	I0925 12:22:53.372709    5014 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 12:22:53.372714    5014 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0925 12:22:53.372727    5014 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-814000 NodeName:stopped-upgrade-814000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0925 12:22:53.372790    5014 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-814000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0925 12:22:53.373166    5014 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0925 12:22:53.376230    5014 binaries.go:44] Found k8s binaries, skipping transfer
	I0925 12:22:53.376262    5014 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0925 12:22:53.378800    5014 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0925 12:22:53.383460    5014 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0925 12:22:53.388090    5014 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0925 12:22:53.393673    5014 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0925 12:22:53.394922    5014 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0925 12:22:53.398627    5014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 12:22:53.478288    5014 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0925 12:22:53.488467    5014 certs.go:68] Setting up /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000 for IP: 10.0.2.15
	I0925 12:22:53.488479    5014 certs.go:194] generating shared ca certs ...
	I0925 12:22:53.488488    5014 certs.go:226] acquiring lock for ca certs: {Name:mk58bb807ba332e9ca8b6e9b3a29d33fd7cd9838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:22:53.488671    5014 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19681-1412/.minikube/ca.key
	I0925 12:22:53.488721    5014 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19681-1412/.minikube/proxy-client-ca.key
	I0925 12:22:53.488732    5014 certs.go:256] generating profile certs ...
	I0925 12:22:53.488811    5014 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/client.key
	I0925 12:22:53.488828    5014 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/apiserver.key.ea672eda
	I0925 12:22:53.488836    5014 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/apiserver.crt.ea672eda with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0925 12:22:53.615767    5014 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/apiserver.crt.ea672eda ...
	I0925 12:22:53.615782    5014 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/apiserver.crt.ea672eda: {Name:mk60c98bb796f71eedc75ba92bb2d1bc236f9239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:22:53.616099    5014 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/apiserver.key.ea672eda ...
	I0925 12:22:53.616106    5014 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/apiserver.key.ea672eda: {Name:mkb802a7e5feb6dffc6f31ee25ad7e0e4f562c1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:22:53.616633    5014 certs.go:381] copying /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/apiserver.crt.ea672eda -> /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/apiserver.crt
	I0925 12:22:53.617195    5014 certs.go:385] copying /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/apiserver.key.ea672eda -> /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/apiserver.key
	I0925 12:22:53.617367    5014 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/proxy-client.key
	I0925 12:22:53.617517    5014 certs.go:484] found cert: /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/1934.pem (1338 bytes)
	W0925 12:22:53.617548    5014 certs.go:480] ignoring /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/1934_empty.pem, impossibly tiny 0 bytes
	I0925 12:22:53.617555    5014 certs.go:484] found cert: /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca-key.pem (1679 bytes)
	I0925 12:22:53.617578    5014 certs.go:484] found cert: /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem (1082 bytes)
	I0925 12:22:53.617597    5014 certs.go:484] found cert: /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem (1123 bytes)
	I0925 12:22:53.617616    5014 certs.go:484] found cert: /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/key.pem (1675 bytes)
	I0925 12:22:53.617654    5014 certs.go:484] found cert: /Users/jenkins/minikube-integration/19681-1412/.minikube/files/etc/ssl/certs/19342.pem (1708 bytes)
	I0925 12:22:53.617976    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0925 12:22:53.625322    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0925 12:22:53.632922    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0925 12:22:53.639857    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0925 12:22:53.646471    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0925 12:22:53.653808    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0925 12:22:53.661186    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0925 12:22:53.668051    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0925 12:22:53.674637    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/files/etc/ssl/certs/19342.pem --> /usr/share/ca-certificates/19342.pem (1708 bytes)
	I0925 12:22:53.681619    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0925 12:22:53.688469    5014 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/1934.pem --> /usr/share/ca-certificates/1934.pem (1338 bytes)
	I0925 12:22:53.695042    5014 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0925 12:22:53.700249    5014 ssh_runner.go:195] Run: openssl version
	I0925 12:22:53.702027    5014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19342.pem && ln -fs /usr/share/ca-certificates/19342.pem /etc/ssl/certs/19342.pem"
	I0925 12:22:53.705275    5014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19342.pem
	I0925 12:22:53.706866    5014 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 25 18:45 /usr/share/ca-certificates/19342.pem
	I0925 12:22:53.706890    5014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19342.pem
	I0925 12:22:53.708733    5014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19342.pem /etc/ssl/certs/3ec20f2e.0"
	I0925 12:22:53.711593    5014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0925 12:22:53.714646    5014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0925 12:22:53.716180    5014 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 25 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I0925 12:22:53.716204    5014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0925 12:22:53.717870    5014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0925 12:22:53.721166    5014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1934.pem && ln -fs /usr/share/ca-certificates/1934.pem /etc/ssl/certs/1934.pem"
	I0925 12:22:53.724077    5014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1934.pem
	I0925 12:22:53.725499    5014 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 25 18:45 /usr/share/ca-certificates/1934.pem
	I0925 12:22:53.725526    5014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1934.pem
	I0925 12:22:53.727434    5014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1934.pem /etc/ssl/certs/51391683.0"
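The symlink commands above follow the OpenSSL CA-store convention: each trusted certificate is reachable through a link named after its subject hash with a `.0` suffix (e.g. b5213941.0 for minikubeCA.pem). A minimal Go sketch of one such step, shelling out the way these Run lines do (a hypothetical helper, not minikube's code; assumes openssl on PATH and write access to /etc/ssl/certs):

```go
// A sketch only: compute the OpenSSL subject hash of a CA file and create
// the /etc/ssl/certs/<hash>.0 symlink, as done above with
// `openssl x509 -hash` plus `ln -fs`.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "openssl failed:", err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941", as seen above
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // mimic ln -fs: replace any existing link
	if err := os.Symlink(cert, link); err != nil {
		fmt.Fprintln(os.Stderr, "symlink failed:", err)
		os.Exit(1)
	}
	fmt.Println("linked", link, "->", cert)
}
```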
	I0925 12:22:53.730626    5014 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0925 12:22:53.732177    5014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0925 12:22:53.734067    5014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0925 12:22:53.736168    5014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0925 12:22:53.738073    5014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0925 12:22:53.740170    5014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0925 12:22:53.741971    5014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
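The six `-checkend 86400` probes above ask whether each control-plane certificate remains valid for at least another 24 hours; a non-zero exit would trigger regeneration. A minimal Go equivalent of a single probe (a sketch only, not minikube's implementation):

```go
// Replicate `openssl x509 -checkend 86400` for one certificate.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile(os.Args[1]) // e.g. /var/lib/minikube/certs/apiserver.crt
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(2)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	// -checkend 86400 exits non-zero if the cert expires within 86400 seconds.
	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h; regenerate it")
		os.Exit(1)
	}
	fmt.Println("certificate is good for at least another 24h")
}
```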
	I0925 12:22:53.743881    5014 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-814000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50513 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-814000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0925 12:22:53.743957    5014 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0925 12:22:53.753913    5014 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0925 12:22:53.757642    5014 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0925 12:22:53.757652    5014 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0925 12:22:53.757683    5014 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0925 12:22:53.761513    5014 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0925 12:22:53.761818    5014 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-814000" does not appear in /Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:22:53.761915    5014 kubeconfig.go:62] /Users/jenkins/minikube-integration/19681-1412/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-814000" cluster setting kubeconfig missing "stopped-upgrade-814000" context setting]
	I0925 12:22:53.762088    5014 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/kubeconfig: {Name:mkc011f0309eba8a9546287478e16310d103c97e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:22:53.762529    5014 kapi.go:59] client config for stopped-upgrade-814000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/client.key", CAFile:"/Users/jenkins/minikube-integration/19681-1412/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1041aa030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
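The dumped rest.Config boils down to a host plus mutual-TLS file paths. A minimal sketch of building an equivalent client config (assumes k8s.io/client-go is available; the paths are the ones printed above):

```go
// A sketch only: host + client cert/key + cluster CA, then a clientset.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	base := "/Users/jenkins/minikube-integration/19681-1412/.minikube"
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: base + "/profiles/stopped-upgrade-814000/client.crt",
			KeyFile:  base + "/profiles/stopped-upgrade-814000/client.key",
			CAFile:   base + "/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Println("building clientset failed:", err)
		return
	}
	_ = clientset // ready for API calls once the apiserver answers
}
```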
	I0925 12:22:53.762872    5014 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0925 12:22:53.766082    5014 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-814000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
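The drift decision hinges on diff's exit status: 0 means the deployed kubeadm.yaml is current, 1 means it differs from the regenerated one (here the criSocket gained its unix:// scheme and the cgroup driver changed), anything higher is an error. A minimal sketch of that check (hypothetical, not minikube's code):

```go
// diff exits 0 when the files match, 1 when they differ, >1 on error,
// so exit status 1 is the "config drift" signal acted on above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.Output()
	if err == nil {
		fmt.Println("no drift: existing kubeadm.yaml is current")
		return
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		fmt.Println("drift detected, reconfigure from kubeadm.yaml.new:")
		fmt.Print(string(out)) // the unified diff, as echoed in the log
		return
	}
	fmt.Fprintln(os.Stderr, "diff failed:", err)
	os.Exit(2)
}
```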
	I0925 12:22:53.766088    5014 kubeadm.go:1160] stopping kube-system containers ...
	I0925 12:22:53.766137    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0925 12:22:53.776817    5014 docker.go:483] Stopping containers: [f669dbb60847 85feec2130cf e18b578755b3 da6e61f7285b 68f667927419 7e4f0f83b4c3 59e96c68682d c62ebbe188b2]
	I0925 12:22:53.776904    5014 ssh_runner.go:195] Run: docker stop f669dbb60847 85feec2130cf e18b578755b3 da6e61f7285b 68f667927419 7e4f0f83b4c3 59e96c68682d c62ebbe188b2
	I0925 12:22:53.787427    5014 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0925 12:22:53.793142    5014 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0925 12:22:53.795899    5014 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0925 12:22:53.795904    5014 kubeadm.go:157] found existing configuration files:
	
	I0925 12:22:53.795931    5014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50513 /etc/kubernetes/admin.conf
	I0925 12:22:53.798348    5014 kubeadm.go:163] "https://control-plane.minikube.internal:50513" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50513 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0925 12:22:53.798370    5014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0925 12:22:53.801421    5014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50513 /etc/kubernetes/kubelet.conf
	I0925 12:22:53.804110    5014 kubeadm.go:163] "https://control-plane.minikube.internal:50513" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50513 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0925 12:22:53.804136    5014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0925 12:22:53.806540    5014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50513 /etc/kubernetes/controller-manager.conf
	I0925 12:22:53.809638    5014 kubeadm.go:163] "https://control-plane.minikube.internal:50513" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50513 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0925 12:22:53.809662    5014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0925 12:22:53.812522    5014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50513 /etc/kubernetes/scheduler.conf
	I0925 12:22:53.814873    5014 kubeadm.go:163] "https://control-plane.minikube.internal:50513" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50513 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0925 12:22:53.814903    5014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
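Each grep/rm pair above applies one rule: a kubeconfig under /etc/kubernetes survives only if it mentions the expected control-plane endpoint; otherwise it is deleted so kubeadm can regenerate it. The same loop sketched in Go (hypothetical, not minikube's code):

```go
// Keep a kubeconfig only if it points at the expected control plane.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:50513")
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(f)
		// A missing file and a file without the endpoint are treated alike,
		// matching the "may not be in ... - will remove" lines above.
		if err == nil && bytes.Contains(data, endpoint) {
			continue
		}
		os.Remove(f)
		fmt.Println("removed stale", f)
	}
}
```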
	I0925 12:22:53.817944    5014 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0925 12:22:53.821147    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0925 12:22:53.845274    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0925 12:22:54.366828    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0925 12:22:54.494522    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0925 12:22:54.525385    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
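Rather than a full `kubeadm init`, the restart replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated config, as the five Run lines above show. A minimal sketch of that sequence (hypothetical wrapper, not minikube's code; paths copied from the log):

```go
// Run each kubeadm init phase in order against the regenerated config.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.24.1/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{kubeadm, "init", "phase"}, p...)
		args = append(args, "--config", cfg)
		cmd := exec.Command("sudo", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
			os.Exit(1)
		}
	}
}
```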
	I0925 12:22:54.552839    5014 api_server.go:52] waiting for apiserver process to appear ...
	I0925 12:22:54.552925    5014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 12:22:55.053032    5014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 12:22:55.553609    5014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 12:22:55.557870    5014 api_server.go:72] duration metric: took 1.005054083s to wait for apiserver process to appear ...
	I0925 12:22:55.557880    5014 api_server.go:88] waiting for apiserver healthz status ...
	I0925 12:22:55.557890    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:23:00.559991    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:23:00.560143    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:23:05.560826    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:23:05.560866    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:23:10.561607    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:23:10.561675    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:23:15.561930    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:23:15.561961    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:23:20.562704    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:23:20.562751    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:23:25.563929    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:23:25.563979    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:23:30.564533    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:23:30.564559    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:23:35.565987    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:23:35.566008    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:23:40.568035    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:23:40.568079    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:23:45.570359    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:23:45.570401    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:23:50.572035    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:23:50.572085    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:23:55.574266    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
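Each probe above waits roughly five seconds for response headers before giving up, and after a round of failures minikube falls back to gathering component logs for diagnosis, as below. A single probe sketched in Go (minikube's real client trusts the cluster CA from the config shown earlier; this sketch skips verification for brevity):

```go
// One GET against /healthz with a short timeout.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap between attempts above
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.0.2.15:8443/healthz")
	if err != nil {
		// The "context deadline exceeded ... awaiting headers" case seen on
		// every attempt in this run: nothing answered on 10.0.2.15:8443.
		fmt.Println("apiserver not healthy yet:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz status:", resp.Status)
}
```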
	I0925 12:23:55.574394    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:23:55.585179    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:23:55.585264    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:23:55.597849    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:23:55.597935    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:23:55.609075    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:23:55.609155    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:23:55.620042    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:23:55.620141    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:23:55.632339    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:23:55.632432    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:23:55.645885    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:23:55.645970    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:23:55.655946    5014 logs.go:276] 0 containers: []
	W0925 12:23:55.655956    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:23:55.656022    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:23:55.666904    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:23:55.666922    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:23:55.666928    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:23:55.686057    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:23:55.686073    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:23:55.697499    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:23:55.697510    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:23:55.711873    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:23:55.711887    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:23:55.725253    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:23:55.725266    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:23:55.770051    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:23:55.770067    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:23:55.785161    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:23:55.785172    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:23:55.801267    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:23:55.801300    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:23:55.840927    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:23:55.840944    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:23:55.845543    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:23:55.845555    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:23:55.864268    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:23:55.864282    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:23:55.877071    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:23:55.877084    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:23:55.903009    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:23:55.903035    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:23:55.915645    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:23:55.915661    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:23:56.002069    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:23:56.002081    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:23:56.026919    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:23:56.026937    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:23:56.039332    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:23:56.039343    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
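Every gathering pass repeats the same pattern: discover each component's containers via the k8s_<name> prefix that cri-dockerd assigns, then tail the last 400 log lines of each. One pass condensed into a sketch (hypothetical, not minikube's code):

```go
// cri-dockerd names containers k8s_<container>_<pod>_<namespace>_..., so
// filtering on name=k8s_etcd finds both the running and the exited etcd.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_etcd", "--format", "{{.ID}}").Output()
	if err != nil {
		fmt.Println("docker ps failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	fmt.Printf("%d containers: %v\n", len(ids), ids) // cf. logs.go:276 above
	for _, id := range ids {
		// Equivalent of the `docker logs --tail 400 <id>` runs above.
		logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		fmt.Printf("--- %s: %d bytes of logs\n", id, len(logs))
	}
}
```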
	I0925 12:23:58.558836    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:24:03.559538    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:24:03.559828    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:24:03.584070    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:24:03.584193    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:24:03.602838    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:24:03.602934    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:24:03.616669    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:24:03.616761    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:24:03.627894    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:24:03.627990    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:24:03.637937    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:24:03.638019    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:24:03.648505    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:24:03.648573    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:24:03.659157    5014 logs.go:276] 0 containers: []
	W0925 12:24:03.659170    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:24:03.659246    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:24:03.670369    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:24:03.670385    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:24:03.670391    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:24:03.685107    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:24:03.685117    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:24:03.696314    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:24:03.696326    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:24:03.717732    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:24:03.717742    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:24:03.729286    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:24:03.729299    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:24:03.755113    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:24:03.755123    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:24:03.793695    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:24:03.793702    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:24:03.807498    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:24:03.807509    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:24:03.822560    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:24:03.822570    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:24:03.826732    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:24:03.826741    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:24:03.842459    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:24:03.842471    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:24:03.855561    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:24:03.855570    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:24:03.867713    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:24:03.867724    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:24:03.906571    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:24:03.906580    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:24:03.920275    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:24:03.920290    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:24:03.931786    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:24:03.931796    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:24:03.943309    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:24:03.943318    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:24:06.483085    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:24:11.484210    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:24:11.484679    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:24:11.514836    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:24:11.514990    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:24:11.537494    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:24:11.537588    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:24:11.551137    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:24:11.551233    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:24:11.563168    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:24:11.563257    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:24:11.573860    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:24:11.573949    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:24:11.584760    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:24:11.584844    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:24:11.597282    5014 logs.go:276] 0 containers: []
	W0925 12:24:11.597295    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:24:11.597374    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:24:11.608187    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:24:11.608208    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:24:11.608214    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:24:11.612925    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:24:11.612932    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:24:11.624765    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:24:11.624776    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:24:11.649987    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:24:11.649995    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:24:11.667269    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:24:11.667280    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:24:11.679302    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:24:11.679317    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:24:11.691319    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:24:11.691331    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:24:11.705469    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:24:11.705483    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:24:11.743308    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:24:11.743321    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:24:11.757808    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:24:11.757822    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:24:11.772223    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:24:11.772237    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:24:11.784068    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:24:11.784079    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:24:11.822770    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:24:11.822778    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:24:11.856953    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:24:11.856964    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:24:11.871099    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:24:11.871111    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:24:11.888607    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:24:11.888617    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:24:11.900334    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:24:11.900346    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:24:14.415047    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:24:19.415930    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:24:19.416047    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:24:19.426801    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:24:19.426885    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:24:19.437990    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:24:19.438086    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:24:19.448310    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:24:19.448391    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:24:19.458806    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:24:19.458895    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:24:19.469562    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:24:19.469644    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:24:19.481464    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:24:19.481546    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:24:19.491642    5014 logs.go:276] 0 containers: []
	W0925 12:24:19.491659    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:24:19.491732    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:24:19.502567    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:24:19.502590    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:24:19.502595    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:24:19.506982    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:24:19.506991    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:24:19.546677    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:24:19.546687    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:24:19.559278    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:24:19.559294    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:24:19.598214    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:24:19.598230    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:24:19.609727    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:24:19.609739    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:24:19.621549    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:24:19.621563    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:24:19.636308    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:24:19.636319    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:24:19.648184    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:24:19.648196    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:24:19.659415    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:24:19.659425    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:24:19.673649    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:24:19.673660    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:24:19.687277    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:24:19.687287    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:24:19.700824    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:24:19.700834    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:24:19.718887    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:24:19.718898    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:24:19.757915    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:24:19.757926    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:24:19.777682    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:24:19.777696    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:24:19.790388    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:24:19.790400    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:24:22.316554    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:24:27.318842    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:24:27.319247    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:24:27.358793    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:24:27.358960    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:24:27.381025    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:24:27.381179    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:24:27.397053    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:24:27.397152    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:24:27.411078    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:24:27.411163    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:24:27.426405    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:24:27.426490    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:24:27.437607    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:24:27.437684    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:24:27.447603    5014 logs.go:276] 0 containers: []
	W0925 12:24:27.447621    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:24:27.447708    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:24:27.458646    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:24:27.458665    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:24:27.458671    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:24:27.463537    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:24:27.463546    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:24:27.499128    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:24:27.499138    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:24:27.511355    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:24:27.511367    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:24:27.526711    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:24:27.526726    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:24:27.540485    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:24:27.540498    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:24:27.554032    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:24:27.554042    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:24:27.579074    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:24:27.579082    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:24:27.617935    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:24:27.617946    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:24:27.632346    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:24:27.632356    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:24:27.649997    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:24:27.650013    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:24:27.687218    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:24:27.687229    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:24:27.701149    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:24:27.701162    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:24:27.712982    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:24:27.712995    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:24:27.724366    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:24:27.724376    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:24:27.738309    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:24:27.738322    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:24:27.749642    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:24:27.749655    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:24:30.263418    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:24:35.265636    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:24:35.265890    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:24:35.286400    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:24:35.286533    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:24:35.301021    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:24:35.301114    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:24:35.313305    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:24:35.313390    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:24:35.328593    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:24:35.328674    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:24:35.339562    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:24:35.339644    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:24:35.350745    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:24:35.350814    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:24:35.361314    5014 logs.go:276] 0 containers: []
	W0925 12:24:35.361326    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:24:35.361395    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:24:35.371678    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:24:35.371696    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:24:35.371701    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:24:35.396840    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:24:35.396852    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:24:35.440011    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:24:35.440022    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:24:35.452257    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:24:35.452267    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:24:35.466267    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:24:35.466278    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:24:35.477605    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:24:35.477616    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:24:35.491403    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:24:35.491413    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:24:35.505195    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:24:35.505205    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:24:35.516912    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:24:35.516921    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:24:35.528205    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:24:35.528216    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:24:35.539329    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:24:35.539344    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:24:35.551486    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:24:35.551502    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:24:35.590533    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:24:35.590543    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:24:35.608497    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:24:35.608508    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:24:35.620325    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:24:35.620335    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:24:35.639168    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:24:35.639178    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:24:35.643933    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:24:35.643941    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:24:38.180537    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:24:43.182780    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:24:43.183000    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:24:43.202579    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:24:43.202689    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:24:43.216575    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:24:43.216670    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:24:43.234895    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:24:43.234981    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:24:43.245160    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:24:43.245251    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:24:43.255573    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:24:43.255651    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:24:43.266749    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:24:43.266829    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:24:43.277317    5014 logs.go:276] 0 containers: []
	W0925 12:24:43.277330    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:24:43.277399    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:24:43.287501    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:24:43.287519    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:24:43.287524    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:24:43.305744    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:24:43.305758    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:24:43.318333    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:24:43.318347    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:24:43.356328    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:24:43.356335    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:24:43.393270    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:24:43.393288    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:24:43.407399    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:24:43.407413    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:24:43.455143    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:24:43.455154    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:24:43.467208    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:24:43.467218    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:24:43.481773    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:24:43.481785    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:24:43.505025    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:24:43.505033    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:24:43.508883    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:24:43.508892    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:24:43.524204    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:24:43.524215    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:24:43.537756    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:24:43.537767    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:24:43.552972    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:24:43.552984    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:24:43.564950    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:24:43.564963    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:24:43.576920    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:24:43.576930    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:24:43.597685    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:24:43.597697    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:24:46.112369    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:24:51.114750    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:24:51.115168    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:24:51.148160    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:24:51.148315    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:24:51.171086    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:24:51.171183    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:24:51.184341    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:24:51.184433    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:24:51.202525    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:24:51.202608    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:24:51.213630    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:24:51.213718    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:24:51.224181    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:24:51.224260    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:24:51.234149    5014 logs.go:276] 0 containers: []
	W0925 12:24:51.234159    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:24:51.234225    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:24:51.244851    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:24:51.244868    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:24:51.244873    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:24:51.282697    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:24:51.282708    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:24:51.299109    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:24:51.299120    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:24:51.311039    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:24:51.311055    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:24:51.315597    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:24:51.315606    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:24:51.351843    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:24:51.351859    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:24:51.367084    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:24:51.367094    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:24:51.380225    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:24:51.380239    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:24:51.421097    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:24:51.421109    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:24:51.432000    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:24:51.432015    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:24:51.446990    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:24:51.447004    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:24:51.458344    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:24:51.458356    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:24:51.472126    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:24:51.472136    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:24:51.484125    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:24:51.484137    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:24:51.501610    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:24:51.501626    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:24:51.516512    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:24:51.516525    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:24:51.528278    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:24:51.528293    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:24:54.053680    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:24:59.055941    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
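
	The five-second gap between each "Checking apiserver healthz" line and its "stopped:" follow-up is a client-side timeout: the GET to /healthz never receives response headers, so the probe gives up and the log-gathering cycle repeats. A sketch of an equivalent probe follows; the 5s http.Client timeout and the skipped TLS verification (test clusters use self-signed certificates) are assumptions for illustration, not minikube's actual client configuration.

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{
	            Timeout: 5 * time.Second, // assumed; matches the ~5s gaps in the log
	            Transport: &http.Transport{
	                // Self-signed test-cluster cert; never skip verification in production.
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        resp, err := client.Get("https://10.0.2.15:8443/healthz")
	        if err != nil {
	            // Against an unresponsive apiserver this prints the same
	            // "Client.Timeout exceeded while awaiting headers" error seen above.
	            fmt.Println("stopped:", err)
	            return
	        }
	        defer resp.Body.Close()
	        fmt.Println("healthz:", resp.Status)
	    }

	A healthy apiserver answers /healthz with 200 and the body "ok" almost immediately, so repeated timeouts like the ones above indicate the apiserver container is not serving at all rather than merely responding slowly.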
	I0925 12:24:59.056335    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:24:59.085892    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:24:59.086049    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:24:59.105271    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:24:59.105392    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:24:59.119535    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:24:59.119614    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:24:59.131531    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:24:59.131611    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:24:59.142194    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:24:59.142299    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:24:59.153121    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:24:59.153196    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:24:59.163901    5014 logs.go:276] 0 containers: []
	W0925 12:24:59.163915    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:24:59.163987    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:24:59.174484    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:24:59.174503    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:24:59.174508    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:24:59.188939    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:24:59.188954    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:24:59.203688    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:24:59.203700    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:24:59.218797    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:24:59.218808    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:24:59.230629    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:24:59.230641    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:24:59.243189    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:24:59.243202    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:24:59.281999    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:24:59.282011    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:24:59.295652    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:24:59.295665    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:24:59.307167    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:24:59.307182    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:24:59.311295    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:24:59.311303    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:24:59.345850    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:24:59.345860    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:24:59.383583    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:24:59.383597    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:24:59.398439    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:24:59.398455    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:24:59.410275    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:24:59.410288    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:24:59.428120    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:24:59.428130    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:24:59.440220    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:24:59.440230    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:24:59.452175    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:24:59.452189    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:25:01.975811    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:25:06.978329    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:25:06.978530    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:25:06.998508    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:25:06.998611    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:25:07.012513    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:25:07.012606    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:25:07.023862    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:25:07.023947    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:25:07.034359    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:25:07.034434    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:25:07.045244    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:25:07.045327    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:25:07.055344    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:25:07.055431    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:25:07.069599    5014 logs.go:276] 0 containers: []
	W0925 12:25:07.069612    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:25:07.069686    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:25:07.080905    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:25:07.080922    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:25:07.080927    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:25:07.092111    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:25:07.092121    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:25:07.103843    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:25:07.103853    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:25:07.108100    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:25:07.108106    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:25:07.122657    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:25:07.122670    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:25:07.134710    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:25:07.134724    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:25:07.147116    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:25:07.147126    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:25:07.158765    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:25:07.158786    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:25:07.176336    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:25:07.176352    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:25:07.188559    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:25:07.188572    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:25:07.224465    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:25:07.224473    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:25:07.238404    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:25:07.238417    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:25:07.281162    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:25:07.281173    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:25:07.294869    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:25:07.294880    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:25:07.306388    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:25:07.306398    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:25:07.331514    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:25:07.331521    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:25:07.366374    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:25:07.366385    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:25:09.886601    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:25:14.888816    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:25:14.889021    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:25:14.906654    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:25:14.906756    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:25:14.918570    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:25:14.918640    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:25:14.929169    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:25:14.929249    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:25:14.941217    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:25:14.941297    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:25:14.951661    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:25:14.951743    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:25:14.962874    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:25:14.962951    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:25:14.973420    5014 logs.go:276] 0 containers: []
	W0925 12:25:14.973435    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:25:14.973506    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:25:14.983921    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:25:14.983940    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:25:14.983945    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:25:15.019169    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:25:15.019185    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:25:15.036727    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:25:15.036737    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:25:15.049681    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:25:15.049691    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:25:15.086912    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:25:15.086925    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:25:15.098799    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:25:15.098810    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:25:15.110201    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:25:15.110210    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:25:15.133869    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:25:15.133876    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:25:15.173293    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:25:15.173304    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:25:15.177555    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:25:15.177561    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:25:15.190798    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:25:15.190808    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:25:15.205090    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:25:15.205100    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:25:15.216839    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:25:15.216851    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:25:15.228007    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:25:15.228017    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:25:15.242021    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:25:15.242032    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:25:15.272126    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:25:15.272141    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:25:15.287922    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:25:15.287933    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
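
	Besides per-container logs, every pass collects host-side sources: the kubelet and docker/cri-docker units via journalctl, recent kernel warnings via dmesg, node state via the bundled kubectl, and a crictl-with-docker-fallback listing for container status. A sketch of driving that command set, assuming local execution (the commands themselves are copied from the log; minikube runs them over SSH):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // hostLogSources maps each host-level section name to the shell command
	    // the log shows ssh_runner executing for it.
	    var hostLogSources = map[string]string{
	        "kubelet":          "sudo journalctl -u kubelet -n 400",
	        "Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
	        "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	        "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	    }

	    func main() {
	        for name, cmd := range hostLogSources {
	            // Same invocation style as the log: /bin/bash -c "<command>".
	            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	            if err != nil {
	                fmt.Printf("%s: %v\n", name, err)
	            }
	            fmt.Printf("== %s ==\n%s\n", name, out)
	        }
	    }

	The "describe nodes" section uses the version-pinned kubectl shipped inside the node (/var/lib/minikube/binaries/v1.24.1/kubectl) with the node's own kubeconfig, so it works even when no kubectl is installed on the test host.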
	I0925 12:25:17.801929    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:25:22.804208    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:25:22.804530    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:25:22.827632    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:25:22.827760    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:25:22.843876    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:25:22.843973    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:25:22.857062    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:25:22.857146    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:25:22.868493    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:25:22.868584    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:25:22.880620    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:25:22.880702    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:25:22.891020    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:25:22.891104    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:25:22.900793    5014 logs.go:276] 0 containers: []
	W0925 12:25:22.900805    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:25:22.900872    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:25:22.910872    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:25:22.910890    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:25:22.910895    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:25:22.925083    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:25:22.925094    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:25:22.940262    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:25:22.940275    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:25:22.959695    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:25:22.959712    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:25:22.972264    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:25:22.972276    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:25:22.984746    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:25:22.984757    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:25:22.988683    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:25:22.988689    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:25:23.027872    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:25:23.027889    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:25:23.040094    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:25:23.040111    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:25:23.052421    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:25:23.052432    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:25:23.075427    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:25:23.075436    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:25:23.111659    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:25:23.111666    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:25:23.126249    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:25:23.126264    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:25:23.142491    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:25:23.142502    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:25:23.154076    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:25:23.154089    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:25:23.190152    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:25:23.190164    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:25:23.204486    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:25:23.204496    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:25:25.718253    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:25:30.720482    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:25:30.720642    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:25:30.737129    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:25:30.737236    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:25:30.749699    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:25:30.749786    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:25:30.760632    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:25:30.760703    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:25:30.771798    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:25:30.771884    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:25:30.781947    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:25:30.782034    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:25:30.792189    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:25:30.792268    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:25:30.803191    5014 logs.go:276] 0 containers: []
	W0925 12:25:30.803200    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:25:30.803263    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:25:30.813580    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:25:30.813598    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:25:30.813603    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:25:30.824584    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:25:30.824596    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:25:30.836011    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:25:30.836021    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:25:30.849053    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:25:30.849063    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:25:30.887645    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:25:30.887661    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:25:30.904046    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:25:30.904059    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:25:30.921672    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:25:30.921684    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:25:30.934221    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:25:30.934231    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:25:30.946185    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:25:30.946196    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:25:30.981678    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:25:30.981694    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:25:30.998517    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:25:30.998528    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:25:31.016816    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:25:31.016832    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:25:31.020741    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:25:31.020746    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:25:31.033995    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:25:31.034004    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:25:31.045532    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:25:31.045541    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:25:31.068668    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:25:31.068675    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:25:31.106044    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:25:31.106056    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:25:33.622737    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:25:38.625032    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:25:38.625375    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:25:38.670082    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:25:38.670224    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:25:38.690328    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:25:38.690422    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:25:38.702515    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:25:38.702602    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:25:38.714848    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:25:38.714922    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:25:38.725357    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:25:38.725441    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:25:38.735862    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:25:38.735939    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:25:38.746061    5014 logs.go:276] 0 containers: []
	W0925 12:25:38.746074    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:25:38.746153    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:25:38.757268    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:25:38.757287    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:25:38.757292    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:25:38.769078    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:25:38.769090    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:25:38.785060    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:25:38.785070    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:25:38.821942    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:25:38.821950    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:25:38.836083    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:25:38.836092    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:25:38.847536    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:25:38.847546    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:25:38.859655    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:25:38.859665    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:25:38.877294    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:25:38.877306    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:25:38.891471    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:25:38.891481    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:25:38.906977    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:25:38.906993    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:25:38.919009    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:25:38.919019    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:25:38.923352    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:25:38.923361    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:25:38.937605    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:25:38.937614    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:25:38.953645    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:25:38.953659    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:25:38.965481    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:25:38.965492    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:25:38.989800    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:25:38.989810    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:25:39.026854    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:25:39.026866    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:25:41.572509    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:25:46.574872    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:25:46.575490    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:25:46.613717    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:25:46.613874    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:25:46.634224    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:25:46.634346    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:25:46.648344    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:25:46.648424    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:25:46.660858    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:25:46.660951    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:25:46.671640    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:25:46.671732    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:25:46.685731    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:25:46.685815    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:25:46.696108    5014 logs.go:276] 0 containers: []
	W0925 12:25:46.696129    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:25:46.696199    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:25:46.706762    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:25:46.706780    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:25:46.706785    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:25:46.744102    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:25:46.744123    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:25:46.782709    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:25:46.782725    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:25:46.795018    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:25:46.795031    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:25:46.807410    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:25:46.807424    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:25:46.819193    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:25:46.819204    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:25:46.854227    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:25:46.854241    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:25:46.868379    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:25:46.868394    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:25:46.882837    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:25:46.882849    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:25:46.894466    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:25:46.894477    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:25:46.917207    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:25:46.917214    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:25:46.931277    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:25:46.931292    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:25:46.946527    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:25:46.946540    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:25:46.961487    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:25:46.961502    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:25:46.966705    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:25:46.966711    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:25:46.984468    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:25:46.984483    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:25:46.997575    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:25:46.997588    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:25:49.511286    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:25:54.513533    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:25:54.513706    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:25:54.528917    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:25:54.529020    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:25:54.542010    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:25:54.542098    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:25:54.552070    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:25:54.552142    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:25:54.562985    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:25:54.563078    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:25:54.573640    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:25:54.573722    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:25:54.584518    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:25:54.584597    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:25:54.595066    5014 logs.go:276] 0 containers: []
	W0925 12:25:54.595084    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:25:54.595162    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:25:54.605904    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:25:54.605926    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:25:54.605931    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:25:54.618779    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:25:54.618790    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:25:54.630683    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:25:54.630694    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:25:54.668085    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:25:54.668095    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:25:54.672144    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:25:54.672153    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:25:54.706527    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:25:54.706541    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:25:54.718558    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:25:54.718569    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:25:54.732012    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:25:54.732027    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:25:54.747276    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:25:54.747284    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:25:54.762225    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:25:54.762236    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:25:54.774811    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:25:54.774822    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:25:54.818500    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:25:54.818514    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:25:54.830406    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:25:54.830419    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:25:54.854745    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:25:54.854756    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:25:54.868886    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:25:54.868901    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:25:54.880228    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:25:54.880240    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:25:54.897957    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:25:54.897968    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:25:57.410492    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:26:02.412785    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:26:02.413092    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:26:02.444835    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:26:02.444947    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:26:02.459093    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:26:02.459191    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:26:02.473560    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:26:02.473644    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:26:02.483681    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:26:02.483752    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:26:02.493639    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:26:02.493724    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:26:02.512688    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:26:02.512773    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:26:02.524062    5014 logs.go:276] 0 containers: []
	W0925 12:26:02.524073    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:26:02.524138    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:26:02.534725    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:26:02.534746    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:26:02.534751    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:26:02.555347    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:26:02.555355    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:26:02.566938    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:26:02.566948    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:26:02.579603    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:26:02.579616    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:26:02.591459    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:26:02.591470    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:26:02.627991    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:26:02.628002    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:26:02.649411    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:26:02.649422    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:26:02.693716    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:26:02.693727    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:26:02.707754    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:26:02.707764    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:26:02.723203    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:26:02.723217    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:26:02.734699    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:26:02.734710    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:26:02.746198    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:26:02.746208    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:26:02.757766    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:26:02.757780    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:26:02.762611    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:26:02.762616    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:26:02.781058    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:26:02.781068    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:26:02.792294    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:26:02.792306    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:26:02.817454    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:26:02.817463    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:26:05.354509    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:26:10.356671    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:26:10.356880    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:26:10.375273    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:26:10.375390    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:26:10.389420    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:26:10.389522    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:26:10.401555    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:26:10.401636    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:26:10.413789    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:26:10.413865    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:26:10.424978    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:26:10.425055    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:26:10.435220    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:26:10.435303    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:26:10.445458    5014 logs.go:276] 0 containers: []
	W0925 12:26:10.445474    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:26:10.445553    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:26:10.456354    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:26:10.456372    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:26:10.456378    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:26:10.470822    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:26:10.470833    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:26:10.482749    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:26:10.482759    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:26:10.500243    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:26:10.500254    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:26:10.538127    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:26:10.538139    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:26:10.552294    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:26:10.552306    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:26:10.563452    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:26:10.563463    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:26:10.578445    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:26:10.578459    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:26:10.617396    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:26:10.617406    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:26:10.622009    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:26:10.622016    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:26:10.636619    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:26:10.636628    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:26:10.649575    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:26:10.649585    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:26:10.673983    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:26:10.673991    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:26:10.685708    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:26:10.685719    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:26:10.719277    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:26:10.719289    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:26:10.730921    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:26:10.730934    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:26:10.745992    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:26:10.746002    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:26:13.262098    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:26:18.264342    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:26:18.264548    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:26:18.291450    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:26:18.291550    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:26:18.306395    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:26:18.306481    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:26:18.317398    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:26:18.317486    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:26:18.328250    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:26:18.328342    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:26:18.338491    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:26:18.338604    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:26:18.348882    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:26:18.348952    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:26:18.359703    5014 logs.go:276] 0 containers: []
	W0925 12:26:18.359717    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:26:18.359790    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:26:18.371225    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:26:18.371242    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:26:18.371248    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:26:18.394168    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:26:18.394186    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:26:18.419064    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:26:18.419076    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:26:18.457834    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:26:18.457843    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:26:18.494422    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:26:18.494435    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:26:18.531861    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:26:18.531872    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:26:18.547297    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:26:18.547308    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:26:18.558647    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:26:18.558660    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:26:18.572428    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:26:18.572443    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:26:18.576732    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:26:18.576739    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:26:18.595387    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:26:18.595402    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:26:18.609781    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:26:18.609791    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:26:18.624774    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:26:18.624784    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:26:18.636284    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:26:18.636293    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:26:18.648896    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:26:18.648910    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:26:18.661663    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:26:18.661676    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:26:18.676617    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:26:18.676629    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:26:21.190052    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:26:26.192402    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:26:26.192740    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:26:26.220110    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:26:26.220252    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:26:26.237597    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:26:26.237706    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:26:26.249997    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:26:26.250072    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:26:26.260565    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:26:26.260634    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:26:26.270897    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:26:26.270982    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:26:26.281087    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:26:26.281173    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:26:26.290846    5014 logs.go:276] 0 containers: []
	W0925 12:26:26.290861    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:26:26.290929    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:26:26.301104    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:26:26.301123    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:26:26.301128    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:26:26.316170    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:26:26.316180    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:26:26.327851    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:26:26.327860    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:26:26.343205    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:26:26.343214    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:26:26.354327    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:26:26.354338    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:26:26.388588    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:26:26.388600    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:26:26.403708    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:26:26.403719    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:26:26.417327    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:26:26.417339    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:26:26.434742    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:26:26.434752    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:26:26.447769    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:26:26.447781    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:26:26.485131    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:26:26.485150    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:26:26.499011    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:26:26.499021    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:26:26.517313    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:26:26.517327    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:26:26.529159    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:26:26.529175    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:26:26.553181    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:26:26.553195    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:26:26.557983    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:26:26.557989    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:26:26.596073    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:26:26.596089    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:26:29.109687    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:26:34.112069    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:26:34.112676    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:26:34.152209    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:26:34.152369    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:26:34.173873    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:26:34.173994    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:26:34.189126    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:26:34.189221    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:26:34.202085    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:26:34.202169    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:26:34.212672    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:26:34.212757    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:26:34.223588    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:26:34.223670    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:26:34.233693    5014 logs.go:276] 0 containers: []
	W0925 12:26:34.233708    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:26:34.233768    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:26:34.244356    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:26:34.244374    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:26:34.244380    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:26:34.279630    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:26:34.279639    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:26:34.317151    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:26:34.317163    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:26:34.356009    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:26:34.356018    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:26:34.370236    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:26:34.370251    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:26:34.382362    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:26:34.382378    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:26:34.398101    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:26:34.398113    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:26:34.402590    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:26:34.402597    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:26:34.421942    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:26:34.421953    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:26:34.440831    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:26:34.440841    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:26:34.456573    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:26:34.456583    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:26:34.468413    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:26:34.468421    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:26:34.487625    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:26:34.487636    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:26:34.499262    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:26:34.499272    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:26:34.516663    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:26:34.516674    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:26:34.529263    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:26:34.529273    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:26:34.551806    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:26:34.551814    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
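[editor's note] The "container status" command above is a shell fallback chain: use crictl if `which` finds it, otherwise fall back to docker. A small sketch of invoking that same compound command from Go, assuming bash and passwordless sudo are available on the target:

    // container_status.go — sketch of the crictl-or-docker fallback above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Run through bash so the backticks and "||" behave as in the log.
    	cmd := exec.Command("/bin/bash", "-c",
    		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		fmt.Println("container status failed:", err)
    	}
    	fmt.Print(string(out))
    }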
	I0925 12:26:37.064739    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:26:42.065930    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:26:42.066404    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:26:42.099165    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:26:42.099318    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:26:42.122860    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:26:42.122957    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:26:42.136027    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:26:42.136112    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:26:42.147566    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:26:42.147652    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:26:42.158009    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:26:42.158123    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:26:42.169908    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:26:42.169998    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:26:42.180859    5014 logs.go:276] 0 containers: []
	W0925 12:26:42.180870    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:26:42.180940    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:26:42.191695    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:26:42.191711    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:26:42.191716    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:26:42.227937    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:26:42.227945    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:26:42.238917    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:26:42.238929    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:26:42.261475    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:26:42.261485    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:26:42.273827    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:26:42.273837    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:26:42.278246    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:26:42.278253    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:26:42.291982    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:26:42.291992    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:26:42.303107    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:26:42.303121    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:26:42.336591    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:26:42.336607    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:26:42.351576    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:26:42.351589    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:26:42.363766    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:26:42.363775    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:26:42.380572    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:26:42.380583    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:26:42.404149    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:26:42.404159    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:26:42.417944    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:26:42.417954    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:26:42.456001    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:26:42.456018    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:26:42.480604    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:26:42.480620    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:26:42.492939    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:26:42.492953    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:26:45.006571    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:26:50.008792    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:26:50.009349    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:26:50.048219    5014 logs.go:276] 2 containers: [b6573931253b f669dbb60847]
	I0925 12:26:50.048370    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:26:50.066778    5014 logs.go:276] 2 containers: [feab1feb03cd da6e61f7285b]
	I0925 12:26:50.066891    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:26:50.085486    5014 logs.go:276] 1 containers: [2f0aaed59dac]
	I0925 12:26:50.085584    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:26:50.097304    5014 logs.go:276] 2 containers: [2c3cabee5fd3 85feec2130cf]
	I0925 12:26:50.097394    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:26:50.107948    5014 logs.go:276] 1 containers: [21c4b57e502b]
	I0925 12:26:50.108027    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:26:50.118812    5014 logs.go:276] 2 containers: [73e4b5e11a81 68f667927419]
	I0925 12:26:50.118885    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:26:50.129292    5014 logs.go:276] 0 containers: []
	W0925 12:26:50.129303    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:26:50.129366    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:26:50.140172    5014 logs.go:276] 2 containers: [493874a3420d e6d79e49f1e6]
	I0925 12:26:50.140191    5014 logs.go:123] Gathering logs for kube-controller-manager [73e4b5e11a81] ...
	I0925 12:26:50.140197    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73e4b5e11a81"
	I0925 12:26:50.158106    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:26:50.158119    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:26:50.179786    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:26:50.179803    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:26:50.184086    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:26:50.184094    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:26:50.233715    5014 logs.go:123] Gathering logs for kube-scheduler [2c3cabee5fd3] ...
	I0925 12:26:50.233727    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3cabee5fd3"
	I0925 12:26:50.249274    5014 logs.go:123] Gathering logs for kube-scheduler [85feec2130cf] ...
	I0925 12:26:50.249285    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85feec2130cf"
	I0925 12:26:50.266104    5014 logs.go:123] Gathering logs for kube-proxy [21c4b57e502b] ...
	I0925 12:26:50.266115    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c4b57e502b"
	I0925 12:26:50.278425    5014 logs.go:123] Gathering logs for kube-controller-manager [68f667927419] ...
	I0925 12:26:50.278439    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f667927419"
	I0925 12:26:50.291192    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:26:50.291202    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:26:50.302999    5014 logs.go:123] Gathering logs for storage-provisioner [493874a3420d] ...
	I0925 12:26:50.303012    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493874a3420d"
	I0925 12:26:50.314319    5014 logs.go:123] Gathering logs for storage-provisioner [e6d79e49f1e6] ...
	I0925 12:26:50.314327    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6d79e49f1e6"
	I0925 12:26:50.325789    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:26:50.325800    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:26:50.364304    5014 logs.go:123] Gathering logs for kube-apiserver [b6573931253b] ...
	I0925 12:26:50.364312    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6573931253b"
	I0925 12:26:50.378832    5014 logs.go:123] Gathering logs for etcd [feab1feb03cd] ...
	I0925 12:26:50.378843    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feab1feb03cd"
	I0925 12:26:50.392915    5014 logs.go:123] Gathering logs for etcd [da6e61f7285b] ...
	I0925 12:26:50.392931    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da6e61f7285b"
	I0925 12:26:50.408062    5014 logs.go:123] Gathering logs for coredns [2f0aaed59dac] ...
	I0925 12:26:50.408071    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0aaed59dac"
	I0925 12:26:50.419097    5014 logs.go:123] Gathering logs for kube-apiserver [f669dbb60847] ...
	I0925 12:26:50.419106    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f669dbb60847"
	I0925 12:26:52.959039    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:26:57.961741    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:26:57.961889    5014 kubeadm.go:597] duration metric: took 4m4.208761333s to restartPrimaryControlPlane
	W0925 12:26:57.962012    5014 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0925 12:26:57.962072    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0925 12:26:58.965487    5014 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.003419708s)
	I0925 12:26:58.965570    5014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 12:26:58.971054    5014 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0925 12:26:58.973922    5014 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0925 12:26:58.976995    5014 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0925 12:26:58.977003    5014 kubeadm.go:157] found existing configuration files:
	
	I0925 12:26:58.977036    5014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50513 /etc/kubernetes/admin.conf
	I0925 12:26:58.980222    5014 kubeadm.go:163] "https://control-plane.minikube.internal:50513" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50513 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0925 12:26:58.980252    5014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0925 12:26:58.982938    5014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50513 /etc/kubernetes/kubelet.conf
	I0925 12:26:58.985374    5014 kubeadm.go:163] "https://control-plane.minikube.internal:50513" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50513 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0925 12:26:58.985406    5014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0925 12:26:58.988538    5014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50513 /etc/kubernetes/controller-manager.conf
	I0925 12:26:58.991848    5014 kubeadm.go:163] "https://control-plane.minikube.internal:50513" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50513 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0925 12:26:58.991876    5014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0925 12:26:58.994552    5014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50513 /etc/kubernetes/scheduler.conf
	I0925 12:26:58.997265    5014 kubeadm.go:163] "https://control-plane.minikube.internal:50513" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50513 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0925 12:26:58.997293    5014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
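[editor's note] The grep/rm sequence above is a check-then-remove pass: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is deleted so the subsequent kubeadm init can regenerate it. Here every grep exits with status 2 because the files are already gone after the reset. A simplified sketch of that pattern, with the endpoint and file list copied from the log (error handling deliberately minimal):

    // stale_config_cleanup.go — sketch of the grep-then-rm pass above.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:50513"
    	files := []string{"admin.conf", "kubelet.conf",
    		"controller-manager.conf", "scheduler.conf"}
    	for _, f := range files {
    		path := "/etc/kubernetes/" + f
    		// grep exits non-zero on no match (1) or missing file (2);
    		// either surfaces as an error here, as in the logged "status 2".
    		if err := exec.Command("grep", endpoint, path).Run(); err != nil {
    			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
    			os.Remove(path) // ignore the error, like `rm -f`
    		}
    	}
    }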
	I0925 12:26:59.000657    5014 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0925 12:26:59.018457    5014 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0925 12:26:59.018492    5014 kubeadm.go:310] [preflight] Running pre-flight checks
	I0925 12:26:59.067477    5014 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0925 12:26:59.067531    5014 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0925 12:26:59.067576    5014 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0925 12:26:59.121979    5014 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0925 12:26:59.126133    5014 out.go:235]   - Generating certificates and keys ...
	I0925 12:26:59.126167    5014 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0925 12:26:59.126220    5014 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0925 12:26:59.126261    5014 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0925 12:26:59.126295    5014 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0925 12:26:59.126352    5014 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0925 12:26:59.126385    5014 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0925 12:26:59.126438    5014 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0925 12:26:59.126476    5014 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0925 12:26:59.126515    5014 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0925 12:26:59.126617    5014 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0925 12:26:59.126637    5014 kubeadm.go:310] [certs] Using the existing "sa" key
	I0925 12:26:59.126666    5014 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0925 12:26:59.382754    5014 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0925 12:26:59.547647    5014 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0925 12:26:59.741669    5014 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0925 12:26:59.857810    5014 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0925 12:26:59.886046    5014 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0925 12:26:59.886870    5014 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0925 12:26:59.886896    5014 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0925 12:26:59.981729    5014 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0925 12:26:59.984844    5014 out.go:235]   - Booting up control plane ...
	I0925 12:26:59.984894    5014 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0925 12:26:59.984942    5014 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0925 12:26:59.984988    5014 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0925 12:26:59.985029    5014 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0925 12:26:59.985133    5014 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0925 12:27:04.483858    5014 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502184 seconds
	I0925 12:27:04.483931    5014 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0925 12:27:04.488888    5014 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0925 12:27:05.010355    5014 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0925 12:27:05.010677    5014 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-814000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0925 12:27:05.514299    5014 kubeadm.go:310] [bootstrap-token] Using token: h640qa.l1d0pjuhrwb7q9j2
	I0925 12:27:05.517879    5014 out.go:235]   - Configuring RBAC rules ...
	I0925 12:27:05.517940    5014 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0925 12:27:05.525686    5014 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0925 12:27:05.527608    5014 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0925 12:27:05.528390    5014 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0925 12:27:05.529217    5014 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0925 12:27:05.530069    5014 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0925 12:27:05.532779    5014 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0925 12:27:05.706169    5014 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0925 12:27:05.927499    5014 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0925 12:27:05.928067    5014 kubeadm.go:310] 
	I0925 12:27:05.928101    5014 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0925 12:27:05.928106    5014 kubeadm.go:310] 
	I0925 12:27:05.928153    5014 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0925 12:27:05.928201    5014 kubeadm.go:310] 
	I0925 12:27:05.928252    5014 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0925 12:27:05.928281    5014 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0925 12:27:05.928303    5014 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0925 12:27:05.928306    5014 kubeadm.go:310] 
	I0925 12:27:05.928348    5014 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0925 12:27:05.928351    5014 kubeadm.go:310] 
	I0925 12:27:05.928375    5014 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0925 12:27:05.928382    5014 kubeadm.go:310] 
	I0925 12:27:05.928445    5014 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0925 12:27:05.928480    5014 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0925 12:27:05.928525    5014 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0925 12:27:05.928529    5014 kubeadm.go:310] 
	I0925 12:27:05.928567    5014 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0925 12:27:05.928625    5014 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0925 12:27:05.928630    5014 kubeadm.go:310] 
	I0925 12:27:05.928673    5014 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token h640qa.l1d0pjuhrwb7q9j2 \
	I0925 12:27:05.928728    5014 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e51346daa4df67057de8045209492e1d5416aabfe1ee2597d0ef678584899cc1 \
	I0925 12:27:05.928745    5014 kubeadm.go:310] 	--control-plane 
	I0925 12:27:05.928748    5014 kubeadm.go:310] 
	I0925 12:27:05.928800    5014 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0925 12:27:05.928809    5014 kubeadm.go:310] 
	I0925 12:27:05.928848    5014 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token h640qa.l1d0pjuhrwb7q9j2 \
	I0925 12:27:05.928908    5014 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e51346daa4df67057de8045209492e1d5416aabfe1ee2597d0ef678584899cc1 
	I0925 12:27:05.928994    5014 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0925 12:27:05.929004    5014 cni.go:84] Creating CNI manager for ""
	I0925 12:27:05.929013    5014 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 12:27:05.935030    5014 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0925 12:27:05.944252    5014 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0925 12:27:05.947435    5014 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
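[editor's note] The scp above installs a bridge conflist so the kubelet's CNI plugin can attach pods. For reference, a generic bridge + host-local configuration of the kind this step installs; the JSON below is an illustrative assumption, not the literal 496-byte /etc/cni/net.d/1-k8s.conflist minikube writes, and the sketch writes to /tmp so it is safe to run:

    // write_bridge_conflist.go — illustrative bridge CNI conflist (assumed
    // field values; NOT minikube's exact file).
    package main

    import (
    	"fmt"
    	"os"
    )

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
    	// Target path assumed for the sketch; the real file lives in /etc/cni/net.d.
    	if err := os.WriteFile("/tmp/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		fmt.Println("write failed:", err)
    	}
    }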
	I0925 12:27:05.952092    5014 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0925 12:27:05.952139    5014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-814000 minikube.k8s.io/updated_at=2024_09_25T12_27_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=cb9e6220ecbd737c1d09ad9630c6f144f437664a minikube.k8s.io/name=stopped-upgrade-814000 minikube.k8s.io/primary=true
	I0925 12:27:05.952140    5014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 12:27:05.996174    5014 kubeadm.go:1113] duration metric: took 44.073625ms to wait for elevateKubeSystemPrivileges
	I0925 12:27:05.996192    5014 ops.go:34] apiserver oom_adj: -16
	I0925 12:27:05.996204    5014 kubeadm.go:394] duration metric: took 4m12.257006667s to StartCluster
	I0925 12:27:05.996214    5014 settings.go:142] acquiring lock: {Name:mk3a21ccfd977fa63a309ae265edad20537229ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:27:05.996304    5014 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:27:05.996739    5014 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/kubeconfig: {Name:mkc011f0309eba8a9546287478e16310d103c97e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:27:05.996936    5014 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:27:05.996999    5014 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0925 12:27:05.997032    5014 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-814000"
	I0925 12:27:05.997043    5014 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-814000"
	I0925 12:27:05.997043    5014 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-814000"
	W0925 12:27:05.997047    5014 addons.go:243] addon storage-provisioner should already be in state true
	I0925 12:27:05.997051    5014 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-814000"
	I0925 12:27:05.997059    5014 host.go:66] Checking if "stopped-upgrade-814000" exists ...
	I0925 12:27:05.997076    5014 config.go:182] Loaded profile config "stopped-upgrade-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0925 12:27:05.998076    5014 kapi.go:59] client config for stopped-upgrade-814000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/stopped-upgrade-814000/client.key", CAFile:"/Users/jenkins/minikube-integration/19681-1412/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1041aa030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0925 12:27:05.998193    5014 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-814000"
	W0925 12:27:05.998197    5014 addons.go:243] addon default-storageclass should already be in state true
	I0925 12:27:05.998203    5014 host.go:66] Checking if "stopped-upgrade-814000" exists ...
	I0925 12:27:06.000972    5014 out.go:177] * Verifying Kubernetes components...
	I0925 12:27:06.001378    5014 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0925 12:27:06.005165    5014 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0925 12:27:06.005174    5014 sshutil.go:53] new ssh client: &{IP:localhost Port:50480 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/stopped-upgrade-814000/id_rsa Username:docker}
	I0925 12:27:06.009011    5014 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 12:27:06.011957    5014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 12:27:06.016071    5014 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 12:27:06.016079    5014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0925 12:27:06.016087    5014 sshutil.go:53] new ssh client: &{IP:localhost Port:50480 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/stopped-upgrade-814000/id_rsa Username:docker}
	I0925 12:27:06.097894    5014 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0925 12:27:06.103165    5014 api_server.go:52] waiting for apiserver process to appear ...
	I0925 12:27:06.103217    5014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 12:27:06.107232    5014 api_server.go:72] duration metric: took 110.2875ms to wait for apiserver process to appear ...
	I0925 12:27:06.107240    5014 api_server.go:88] waiting for apiserver healthz status ...
	I0925 12:27:06.107247    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:27:06.122958    5014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 12:27:06.186999    5014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0925 12:27:06.504707    5014 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0925 12:27:06.504719    5014 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0925 12:27:11.108180    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:27:11.108231    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:27:16.109152    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:27:16.109211    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:27:21.109432    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:27:21.109481    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:27:26.109790    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:27:26.109835    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:27:31.110196    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:27:31.110229    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:27:36.110721    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:27:36.110755    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0925 12:27:36.506521    5014 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0925 12:27:36.510895    5014 out.go:177] * Enabled addons: storage-provisioner
	I0925 12:27:36.518715    5014 addons.go:510] duration metric: took 30.522328542s for enable addons: enabled=[storage-provisioner]
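[editor's note] The default-storageclass failure above comes from the addon callback trying to list StorageClasses while the apiserver is unreachable, so the GET times out. A client-go sketch of that list call (the kubeconfig path follows the log; this is a sketch under those assumptions, not minikube's addons code):

    // list_storageclasses.go — sketch of the call that failed with
    // "dial tcp 10.0.2.15:8443: i/o timeout" above.
    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		fmt.Println("kubeconfig:", err)
    		return
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		fmt.Println("client:", err)
    		return
    	}
    	scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		// With the apiserver down this is the i/o timeout from the log.
    		fmt.Println("Error listing StorageClasses:", err)
    		return
    	}
    	for _, sc := range scs.Items {
    		fmt.Println(sc.Name)
    	}
    }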
	I0925 12:27:41.111408    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:27:41.111457    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:27:46.112465    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:27:46.112503    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:27:51.113662    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:27:51.113703    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:27:56.115287    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:27:56.115319    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:28:01.116400    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:28:01.116444    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:28:06.118541    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:28:06.118642    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:28:06.142304    5014 logs.go:276] 1 containers: [2ed5fe57e6c0]
	I0925 12:28:06.142394    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:28:06.153991    5014 logs.go:276] 1 containers: [037eede8142c]
	I0925 12:28:06.154082    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:28:06.168572    5014 logs.go:276] 2 containers: [5516aacf7bec 2165303b7771]
	I0925 12:28:06.168655    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:28:06.179178    5014 logs.go:276] 1 containers: [5f1c4edd1eb4]
	I0925 12:28:06.179258    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:28:06.189735    5014 logs.go:276] 1 containers: [caa3fd1f1297]
	I0925 12:28:06.189822    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:28:06.200133    5014 logs.go:276] 1 containers: [79a81322c101]
	I0925 12:28:06.200214    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:28:06.210338    5014 logs.go:276] 0 containers: []
	W0925 12:28:06.210350    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:28:06.210418    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:28:06.220449    5014 logs.go:276] 1 containers: [d64985fba232]
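	Once the probe fails, the run enumerates one control-plane component at a time with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, relying on the kubelet's `k8s_` container-name prefix; the "N containers: [...]" lines are the parsed IDs. A hedged sketch of that step, run locally rather than over the ssh_runner shown in the log:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers (running or exited)
// whose name matches k8s_<component>, e.g. k8s_kube-apiserver.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}
```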
	I0925 12:28:06.220468    5014 logs.go:123] Gathering logs for kube-scheduler [5f1c4edd1eb4] ...
	I0925 12:28:06.220474    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1c4edd1eb4"
	I0925 12:28:06.235863    5014 logs.go:123] Gathering logs for kube-proxy [caa3fd1f1297] ...
	I0925 12:28:06.235873    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa3fd1f1297"
	I0925 12:28:06.247541    5014 logs.go:123] Gathering logs for kube-controller-manager [79a81322c101] ...
	I0925 12:28:06.247552    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79a81322c101"
	I0925 12:28:06.266135    5014 logs.go:123] Gathering logs for storage-provisioner [d64985fba232] ...
	I0925 12:28:06.266150    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64985fba232"
	I0925 12:28:06.278336    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:28:06.278345    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:28:06.282613    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:28:06.282621    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:28:06.317545    5014 logs.go:123] Gathering logs for coredns [5516aacf7bec] ...
	I0925 12:28:06.317558    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5516aacf7bec"
	I0925 12:28:06.329574    5014 logs.go:123] Gathering logs for coredns [2165303b7771] ...
	I0925 12:28:06.329588    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2165303b7771"
	I0925 12:28:06.341770    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:28:06.341782    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:28:06.367583    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:28:06.367595    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:28:06.385257    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:28:06.385274    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:28:06.423612    5014 logs.go:123] Gathering logs for kube-apiserver [2ed5fe57e6c0] ...
	I0925 12:28:06.423623    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed5fe57e6c0"
	I0925 12:28:06.438209    5014 logs.go:123] Gathering logs for etcd [037eede8142c] ...
	I0925 12:28:06.438218    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 037eede8142c"
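	Each "Gathering logs for ..." pair above then tails the matched container with `docker logs --tail 400 <id>`, wrapped in `/bin/bash -c` exactly as the ssh_runner lines show. A minimal local sketch of that collection step; the container ID below is the kube-apiserver ID from this log and is purely illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
)

// gatherLogs returns the last 400 log lines of the given container,
// mirroring the bash -c invocation in the log above.
func gatherLogs(id string) (string, error) {
	cmd := exec.Command("/bin/bash", "-c",
		fmt.Sprintf("docker logs --tail 400 %s", id))
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	out, err := gatherLogs("2ed5fe57e6c0") // kube-apiserver ID from the log
	if err != nil {
		fmt.Println("gather failed:", err)
	}
	fmt.Print(out)
}
```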
	I0925 12:28:08.956009    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:28:13.958419    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:28:13.958680    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:28:13.981510    5014 logs.go:276] 1 containers: [2ed5fe57e6c0]
	I0925 12:28:13.981632    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:28:14.001108    5014 logs.go:276] 1 containers: [037eede8142c]
	I0925 12:28:14.001206    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:28:14.013878    5014 logs.go:276] 2 containers: [5516aacf7bec 2165303b7771]
	I0925 12:28:14.013961    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:28:14.025506    5014 logs.go:276] 1 containers: [5f1c4edd1eb4]
	I0925 12:28:14.025590    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:28:14.035958    5014 logs.go:276] 1 containers: [caa3fd1f1297]
	I0925 12:28:14.036039    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:28:14.046462    5014 logs.go:276] 1 containers: [79a81322c101]
	I0925 12:28:14.046541    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:28:14.057166    5014 logs.go:276] 0 containers: []
	W0925 12:28:14.057178    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:28:14.057251    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:28:14.068068    5014 logs.go:276] 1 containers: [d64985fba232]
	I0925 12:28:14.068082    5014 logs.go:123] Gathering logs for kube-scheduler [5f1c4edd1eb4] ...
	I0925 12:28:14.068088    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1c4edd1eb4"
	I0925 12:28:14.083073    5014 logs.go:123] Gathering logs for kube-proxy [caa3fd1f1297] ...
	I0925 12:28:14.083086    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa3fd1f1297"
	I0925 12:28:14.094543    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:28:14.094557    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:28:14.106320    5014 logs.go:123] Gathering logs for etcd [037eede8142c] ...
	I0925 12:28:14.106330    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 037eede8142c"
	I0925 12:28:14.120601    5014 logs.go:123] Gathering logs for coredns [5516aacf7bec] ...
	I0925 12:28:14.120611    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5516aacf7bec"
	I0925 12:28:14.131716    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:28:14.131731    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:28:14.170469    5014 logs.go:123] Gathering logs for kube-apiserver [2ed5fe57e6c0] ...
	I0925 12:28:14.170481    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed5fe57e6c0"
	I0925 12:28:14.188589    5014 logs.go:123] Gathering logs for coredns [2165303b7771] ...
	I0925 12:28:14.188599    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2165303b7771"
	I0925 12:28:14.200470    5014 logs.go:123] Gathering logs for kube-controller-manager [79a81322c101] ...
	I0925 12:28:14.200479    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79a81322c101"
	I0925 12:28:14.218091    5014 logs.go:123] Gathering logs for storage-provisioner [d64985fba232] ...
	I0925 12:28:14.218101    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64985fba232"
	I0925 12:28:14.229727    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:28:14.229738    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:28:14.253528    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:28:14.253536    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:28:14.290725    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:28:14.290733    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
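	The kubelet, Docker, and dmesg sources in each cycle come from systemd's journal and the kernel ring buffer rather than container logs, using the exact commands visible above. A sketch assuming a systemd host with journalctl and dmesg on the PATH:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Commands copied verbatim from the log's gathering steps.
	cmds := []string{
		"sudo journalctl -u kubelet -n 400",
		"sudo journalctl -u docker -u cri-docker -n 400",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	}
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("--- %s ---\n%s", c, out)
	}
}
```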
	I0925 12:28:16.796737    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:28:21.799002    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:28:21.799422    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:28:21.831545    5014 logs.go:276] 1 containers: [2ed5fe57e6c0]
	I0925 12:28:21.831689    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:28:21.850282    5014 logs.go:276] 1 containers: [037eede8142c]
	I0925 12:28:21.850376    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:28:21.863864    5014 logs.go:276] 2 containers: [5516aacf7bec 2165303b7771]
	I0925 12:28:21.863950    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:28:21.875394    5014 logs.go:276] 1 containers: [5f1c4edd1eb4]
	I0925 12:28:21.875478    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:28:21.885605    5014 logs.go:276] 1 containers: [caa3fd1f1297]
	I0925 12:28:21.885694    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:28:21.895916    5014 logs.go:276] 1 containers: [79a81322c101]
	I0925 12:28:21.895998    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:28:21.905566    5014 logs.go:276] 0 containers: []
	W0925 12:28:21.905580    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:28:21.905648    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:28:21.916015    5014 logs.go:276] 1 containers: [d64985fba232]
	I0925 12:28:21.916031    5014 logs.go:123] Gathering logs for coredns [5516aacf7bec] ...
	I0925 12:28:21.916036    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5516aacf7bec"
	I0925 12:28:21.927485    5014 logs.go:123] Gathering logs for kube-scheduler [5f1c4edd1eb4] ...
	I0925 12:28:21.927500    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1c4edd1eb4"
	I0925 12:28:21.942365    5014 logs.go:123] Gathering logs for kube-controller-manager [79a81322c101] ...
	I0925 12:28:21.942376    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79a81322c101"
	I0925 12:28:21.960033    5014 logs.go:123] Gathering logs for etcd [037eede8142c] ...
	I0925 12:28:21.960044    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 037eede8142c"
	I0925 12:28:21.973174    5014 logs.go:123] Gathering logs for coredns [2165303b7771] ...
	I0925 12:28:21.973184    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2165303b7771"
	I0925 12:28:21.984828    5014 logs.go:123] Gathering logs for kube-proxy [caa3fd1f1297] ...
	I0925 12:28:21.984838    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa3fd1f1297"
	I0925 12:28:21.996395    5014 logs.go:123] Gathering logs for storage-provisioner [d64985fba232] ...
	I0925 12:28:21.996405    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64985fba232"
	I0925 12:28:22.007392    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:28:22.007403    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:28:22.045712    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:28:22.045720    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:28:22.050195    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:28:22.050205    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:28:22.085689    5014 logs.go:123] Gathering logs for kube-apiserver [2ed5fe57e6c0] ...
	I0925 12:28:22.085703    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed5fe57e6c0"
	I0925 12:28:22.100164    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:28:22.100174    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:28:22.125050    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:28:22.125059    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:28:24.638345    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:28:29.641214    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:28:29.641849    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:28:29.690838    5014 logs.go:276] 1 containers: [2ed5fe57e6c0]
	I0925 12:28:29.690991    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:28:29.713255    5014 logs.go:276] 1 containers: [037eede8142c]
	I0925 12:28:29.713356    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:28:29.727330    5014 logs.go:276] 2 containers: [5516aacf7bec 2165303b7771]
	I0925 12:28:29.727408    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:28:29.738668    5014 logs.go:276] 1 containers: [5f1c4edd1eb4]
	I0925 12:28:29.738740    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:28:29.749215    5014 logs.go:276] 1 containers: [caa3fd1f1297]
	I0925 12:28:29.749294    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:28:29.759835    5014 logs.go:276] 1 containers: [79a81322c101]
	I0925 12:28:29.759903    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:28:29.773162    5014 logs.go:276] 0 containers: []
	W0925 12:28:29.773178    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:28:29.773244    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:28:29.783990    5014 logs.go:276] 1 containers: [d64985fba232]
	I0925 12:28:29.784006    5014 logs.go:123] Gathering logs for kube-scheduler [5f1c4edd1eb4] ...
	I0925 12:28:29.784012    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1c4edd1eb4"
	I0925 12:28:29.800983    5014 logs.go:123] Gathering logs for storage-provisioner [d64985fba232] ...
	I0925 12:28:29.800994    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64985fba232"
	I0925 12:28:29.813146    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:28:29.813157    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:28:29.825148    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:28:29.825159    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:28:29.862034    5014 logs.go:123] Gathering logs for coredns [5516aacf7bec] ...
	I0925 12:28:29.862043    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5516aacf7bec"
	I0925 12:28:29.873614    5014 logs.go:123] Gathering logs for kube-apiserver [2ed5fe57e6c0] ...
	I0925 12:28:29.873627    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed5fe57e6c0"
	I0925 12:28:29.891653    5014 logs.go:123] Gathering logs for etcd [037eede8142c] ...
	I0925 12:28:29.891665    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 037eede8142c"
	I0925 12:28:29.906506    5014 logs.go:123] Gathering logs for coredns [2165303b7771] ...
	I0925 12:28:29.906515    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2165303b7771"
	I0925 12:28:29.918532    5014 logs.go:123] Gathering logs for kube-proxy [caa3fd1f1297] ...
	I0925 12:28:29.918544    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa3fd1f1297"
	I0925 12:28:29.930240    5014 logs.go:123] Gathering logs for kube-controller-manager [79a81322c101] ...
	I0925 12:28:29.930255    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79a81322c101"
	I0925 12:28:29.948011    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:28:29.948021    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:28:29.972825    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:28:29.972833    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:28:29.977300    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:28:29.977306    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
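	The "describe nodes" step uses the kubectl binary minikube places inside the guest, pinned to the cluster's Kubernetes version and pointed at the in-guest kubeconfig, which is why it keeps working even while the apiserver healthz probe times out. A sketch under the paths shown in the log:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Binary and kubeconfig paths taken from the log lines above.
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
	if err != nil {
		fmt.Println("describe nodes failed:", err)
	}
	fmt.Print(string(out))
}
```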
	I0925 12:28:32.525057    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:28:37.527495    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:28:37.527966    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:28:37.560142    5014 logs.go:276] 1 containers: [2ed5fe57e6c0]
	I0925 12:28:37.560276    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:28:37.576905    5014 logs.go:276] 1 containers: [037eede8142c]
	I0925 12:28:37.576993    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:28:37.589774    5014 logs.go:276] 2 containers: [5516aacf7bec 2165303b7771]
	I0925 12:28:37.589854    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:28:37.600833    5014 logs.go:276] 1 containers: [5f1c4edd1eb4]
	I0925 12:28:37.600905    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:28:37.611038    5014 logs.go:276] 1 containers: [caa3fd1f1297]
	I0925 12:28:37.611110    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:28:37.621444    5014 logs.go:276] 1 containers: [79a81322c101]
	I0925 12:28:37.621520    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:28:37.631844    5014 logs.go:276] 0 containers: []
	W0925 12:28:37.631855    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:28:37.631917    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:28:37.642065    5014 logs.go:276] 1 containers: [d64985fba232]
	I0925 12:28:37.642078    5014 logs.go:123] Gathering logs for storage-provisioner [d64985fba232] ...
	I0925 12:28:37.642083    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64985fba232"
	I0925 12:28:37.653513    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:28:37.653522    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:28:37.692580    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:28:37.692587    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:28:37.697072    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:28:37.697078    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:28:37.732361    5014 logs.go:123] Gathering logs for kube-apiserver [2ed5fe57e6c0] ...
	I0925 12:28:37.732376    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed5fe57e6c0"
	I0925 12:28:37.751056    5014 logs.go:123] Gathering logs for etcd [037eede8142c] ...
	I0925 12:28:37.751066    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 037eede8142c"
	I0925 12:28:37.765640    5014 logs.go:123] Gathering logs for coredns [5516aacf7bec] ...
	I0925 12:28:37.765655    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5516aacf7bec"
	I0925 12:28:37.777570    5014 logs.go:123] Gathering logs for kube-controller-manager [79a81322c101] ...
	I0925 12:28:37.777585    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79a81322c101"
	I0925 12:28:37.797982    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:28:37.797995    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:28:37.822299    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:28:37.822305    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:28:37.833230    5014 logs.go:123] Gathering logs for coredns [2165303b7771] ...
	I0925 12:28:37.833243    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2165303b7771"
	I0925 12:28:37.844759    5014 logs.go:123] Gathering logs for kube-scheduler [5f1c4edd1eb4] ...
	I0925 12:28:37.844768    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1c4edd1eb4"
	I0925 12:28:37.859684    5014 logs.go:123] Gathering logs for kube-proxy [caa3fd1f1297] ...
	I0925 12:28:37.859699    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa3fd1f1297"
	I0925 12:28:40.373329    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:28:45.376065    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:28:45.376499    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:28:45.407720    5014 logs.go:276] 1 containers: [2ed5fe57e6c0]
	I0925 12:28:45.407898    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:28:45.428151    5014 logs.go:276] 1 containers: [037eede8142c]
	I0925 12:28:45.428257    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:28:45.443743    5014 logs.go:276] 2 containers: [5516aacf7bec 2165303b7771]
	I0925 12:28:45.443843    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:28:45.455910    5014 logs.go:276] 1 containers: [5f1c4edd1eb4]
	I0925 12:28:45.456000    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:28:45.466846    5014 logs.go:276] 1 containers: [caa3fd1f1297]
	I0925 12:28:45.466935    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:28:45.476997    5014 logs.go:276] 1 containers: [79a81322c101]
	I0925 12:28:45.477072    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:28:45.487073    5014 logs.go:276] 0 containers: []
	W0925 12:28:45.487084    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:28:45.487156    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:28:45.502078    5014 logs.go:276] 1 containers: [d64985fba232]
	I0925 12:28:45.502095    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:28:45.502101    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:28:45.526785    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:28:45.526795    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:28:45.530799    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:28:45.530805    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:28:45.564813    5014 logs.go:123] Gathering logs for kube-apiserver [2ed5fe57e6c0] ...
	I0925 12:28:45.564824    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed5fe57e6c0"
	I0925 12:28:45.579452    5014 logs.go:123] Gathering logs for etcd [037eede8142c] ...
	I0925 12:28:45.579463    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 037eede8142c"
	I0925 12:28:45.593127    5014 logs.go:123] Gathering logs for coredns [5516aacf7bec] ...
	I0925 12:28:45.593137    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5516aacf7bec"
	I0925 12:28:45.604442    5014 logs.go:123] Gathering logs for coredns [2165303b7771] ...
	I0925 12:28:45.604453    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2165303b7771"
	I0925 12:28:45.617722    5014 logs.go:123] Gathering logs for kube-scheduler [5f1c4edd1eb4] ...
	I0925 12:28:45.617732    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1c4edd1eb4"
	I0925 12:28:45.633174    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:28:45.633184    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:28:45.644845    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:28:45.644854    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:28:45.683256    5014 logs.go:123] Gathering logs for kube-proxy [caa3fd1f1297] ...
	I0925 12:28:45.683263    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa3fd1f1297"
	I0925 12:28:45.694612    5014 logs.go:123] Gathering logs for kube-controller-manager [79a81322c101] ...
	I0925 12:28:45.694621    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79a81322c101"
	I0925 12:28:45.712696    5014 logs.go:123] Gathering logs for storage-provisioner [d64985fba232] ...
	I0925 12:28:45.712708    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64985fba232"
	I0925 12:28:48.226028    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:28:53.228816    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:28:53.229351    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:28:53.274188    5014 logs.go:276] 1 containers: [2ed5fe57e6c0]
	I0925 12:28:53.274332    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:28:53.291761    5014 logs.go:276] 1 containers: [037eede8142c]
	I0925 12:28:53.291864    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:28:53.305143    5014 logs.go:276] 2 containers: [5516aacf7bec 2165303b7771]
	I0925 12:28:53.305230    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:28:53.316887    5014 logs.go:276] 1 containers: [5f1c4edd1eb4]
	I0925 12:28:53.316967    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:28:53.327712    5014 logs.go:276] 1 containers: [caa3fd1f1297]
	I0925 12:28:53.327790    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:28:53.338303    5014 logs.go:276] 1 containers: [79a81322c101]
	I0925 12:28:53.338386    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:28:53.348679    5014 logs.go:276] 0 containers: []
	W0925 12:28:53.348693    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:28:53.348761    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:28:53.359435    5014 logs.go:276] 1 containers: [d64985fba232]
	I0925 12:28:53.359450    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:28:53.359456    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:28:53.384253    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:28:53.384264    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:28:53.396144    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:28:53.396160    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:28:53.400581    5014 logs.go:123] Gathering logs for coredns [5516aacf7bec] ...
	I0925 12:28:53.400588    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5516aacf7bec"
	I0925 12:28:53.412221    5014 logs.go:123] Gathering logs for coredns [2165303b7771] ...
	I0925 12:28:53.412236    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2165303b7771"
	I0925 12:28:53.424131    5014 logs.go:123] Gathering logs for kube-proxy [caa3fd1f1297] ...
	I0925 12:28:53.424142    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa3fd1f1297"
	I0925 12:28:53.435810    5014 logs.go:123] Gathering logs for kube-scheduler [5f1c4edd1eb4] ...
	I0925 12:28:53.435825    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1c4edd1eb4"
	I0925 12:28:53.450260    5014 logs.go:123] Gathering logs for kube-controller-manager [79a81322c101] ...
	I0925 12:28:53.450275    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79a81322c101"
	I0925 12:28:53.470959    5014 logs.go:123] Gathering logs for storage-provisioner [d64985fba232] ...
	I0925 12:28:53.470972    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64985fba232"
	I0925 12:28:53.482257    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:28:53.482268    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:28:53.519141    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:28:53.519149    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:28:53.553956    5014 logs.go:123] Gathering logs for kube-apiserver [2ed5fe57e6c0] ...
	I0925 12:28:53.553966    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed5fe57e6c0"
	I0925 12:28:53.568149    5014 logs.go:123] Gathering logs for etcd [037eede8142c] ...
	I0925 12:28:53.568158    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 037eede8142c"
	I0925 12:28:56.084350    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:29:01.086709    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:29:01.087164    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:29:01.115570    5014 logs.go:276] 1 containers: [2ed5fe57e6c0]
	I0925 12:29:01.115716    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:29:01.133857    5014 logs.go:276] 1 containers: [037eede8142c]
	I0925 12:29:01.133963    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:29:01.149479    5014 logs.go:276] 2 containers: [5516aacf7bec 2165303b7771]
	I0925 12:29:01.149549    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:29:01.161034    5014 logs.go:276] 1 containers: [5f1c4edd1eb4]
	I0925 12:29:01.161096    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:29:01.171860    5014 logs.go:276] 1 containers: [caa3fd1f1297]
	I0925 12:29:01.171945    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:29:01.182174    5014 logs.go:276] 1 containers: [79a81322c101]
	I0925 12:29:01.182247    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:29:01.192635    5014 logs.go:276] 0 containers: []
	W0925 12:29:01.192647    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:29:01.192707    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:29:01.203169    5014 logs.go:276] 1 containers: [d64985fba232]
	I0925 12:29:01.203182    5014 logs.go:123] Gathering logs for kube-scheduler [5f1c4edd1eb4] ...
	I0925 12:29:01.203188    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1c4edd1eb4"
	I0925 12:29:01.221737    5014 logs.go:123] Gathering logs for storage-provisioner [d64985fba232] ...
	I0925 12:29:01.221745    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64985fba232"
	I0925 12:29:01.233157    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:29:01.233171    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:29:01.258122    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:29:01.258133    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:29:01.269387    5014 logs.go:123] Gathering logs for etcd [037eede8142c] ...
	I0925 12:29:01.269403    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 037eede8142c"
	I0925 12:29:01.283666    5014 logs.go:123] Gathering logs for coredns [5516aacf7bec] ...
	I0925 12:29:01.283679    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5516aacf7bec"
	I0925 12:29:01.295316    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:29:01.295327    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:29:01.329722    5014 logs.go:123] Gathering logs for kube-apiserver [2ed5fe57e6c0] ...
	I0925 12:29:01.329738    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed5fe57e6c0"
	I0925 12:29:01.349805    5014 logs.go:123] Gathering logs for coredns [2165303b7771] ...
	I0925 12:29:01.349816    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2165303b7771"
	I0925 12:29:01.361329    5014 logs.go:123] Gathering logs for kube-proxy [caa3fd1f1297] ...
	I0925 12:29:01.361340    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa3fd1f1297"
	I0925 12:29:01.373243    5014 logs.go:123] Gathering logs for kube-controller-manager [79a81322c101] ...
	I0925 12:29:01.373258    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79a81322c101"
	I0925 12:29:01.390519    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:29:01.390533    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:29:01.428743    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:29:01.428752    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:29:03.935419    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:29:08.937706    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:29:08.938272    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:29:08.973070    5014 logs.go:276] 1 containers: [2ed5fe57e6c0]
	I0925 12:29:08.973235    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:29:08.998286    5014 logs.go:276] 1 containers: [037eede8142c]
	I0925 12:29:08.998381    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:29:09.011736    5014 logs.go:276] 2 containers: [5516aacf7bec 2165303b7771]
	I0925 12:29:09.011816    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:29:09.023190    5014 logs.go:276] 1 containers: [5f1c4edd1eb4]
	I0925 12:29:09.023272    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:29:09.038099    5014 logs.go:276] 1 containers: [caa3fd1f1297]
	I0925 12:29:09.038172    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:29:09.048449    5014 logs.go:276] 1 containers: [79a81322c101]
	I0925 12:29:09.048516    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:29:09.063708    5014 logs.go:276] 0 containers: []
	W0925 12:29:09.063720    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:29:09.063787    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:29:09.074337    5014 logs.go:276] 1 containers: [d64985fba232]
	I0925 12:29:09.074352    5014 logs.go:123] Gathering logs for kube-proxy [caa3fd1f1297] ...
	I0925 12:29:09.074356    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa3fd1f1297"
	I0925 12:29:09.085676    5014 logs.go:123] Gathering logs for kube-controller-manager [79a81322c101] ...
	I0925 12:29:09.085689    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79a81322c101"
	I0925 12:29:09.103223    5014 logs.go:123] Gathering logs for storage-provisioner [d64985fba232] ...
	I0925 12:29:09.103232    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64985fba232"
	I0925 12:29:09.114385    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:29:09.114394    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:29:09.150666    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:29:09.150674    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:29:09.154847    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:29:09.154856    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:29:09.191016    5014 logs.go:123] Gathering logs for etcd [037eede8142c] ...
	I0925 12:29:09.191029    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 037eede8142c"
	I0925 12:29:09.207092    5014 logs.go:123] Gathering logs for coredns [5516aacf7bec] ...
	I0925 12:29:09.207101    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5516aacf7bec"
	I0925 12:29:09.218545    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:29:09.218555    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:29:09.241239    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:29:09.241245    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:29:09.252434    5014 logs.go:123] Gathering logs for kube-apiserver [2ed5fe57e6c0] ...
	I0925 12:29:09.252450    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed5fe57e6c0"
	I0925 12:29:09.267004    5014 logs.go:123] Gathering logs for coredns [2165303b7771] ...
	I0925 12:29:09.267013    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2165303b7771"
	I0925 12:29:09.278339    5014 logs.go:123] Gathering logs for kube-scheduler [5f1c4edd1eb4] ...
	I0925 12:29:09.278354    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1c4edd1eb4"
	I0925 12:29:11.795915    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:29:16.798268    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:29:16.798485    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:29:16.817640    5014 logs.go:276] 1 containers: [2ed5fe57e6c0]
	I0925 12:29:16.817745    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:29:16.831608    5014 logs.go:276] 1 containers: [037eede8142c]
	I0925 12:29:16.831702    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:29:16.843703    5014 logs.go:276] 2 containers: [5516aacf7bec 2165303b7771]
	I0925 12:29:16.843783    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:29:16.854150    5014 logs.go:276] 1 containers: [5f1c4edd1eb4]
	I0925 12:29:16.854230    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:29:16.864655    5014 logs.go:276] 1 containers: [caa3fd1f1297]
	I0925 12:29:16.864739    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:29:16.875123    5014 logs.go:276] 1 containers: [79a81322c101]
	I0925 12:29:16.875201    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:29:16.885374    5014 logs.go:276] 0 containers: []
	W0925 12:29:16.885390    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:29:16.885456    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:29:16.895829    5014 logs.go:276] 1 containers: [d64985fba232]
	I0925 12:29:16.895843    5014 logs.go:123] Gathering logs for coredns [2165303b7771] ...
	I0925 12:29:16.895850    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2165303b7771"
	I0925 12:29:16.907230    5014 logs.go:123] Gathering logs for kube-proxy [caa3fd1f1297] ...
	I0925 12:29:16.907244    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa3fd1f1297"
	I0925 12:29:16.919453    5014 logs.go:123] Gathering logs for kube-controller-manager [79a81322c101] ...
	I0925 12:29:16.919463    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79a81322c101"
	I0925 12:29:16.936239    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:29:16.936249    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:29:16.959523    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:29:16.959530    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:29:16.971351    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:29:16.971360    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:29:17.009453    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:29:17.009460    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:29:17.013842    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:29:17.013850    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:29:17.048145    5014 logs.go:123] Gathering logs for kube-scheduler [5f1c4edd1eb4] ...
	I0925 12:29:17.048160    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1c4edd1eb4"
	I0925 12:29:17.062958    5014 logs.go:123] Gathering logs for storage-provisioner [d64985fba232] ...
	I0925 12:29:17.062970    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64985fba232"
	I0925 12:29:17.074462    5014 logs.go:123] Gathering logs for kube-apiserver [2ed5fe57e6c0] ...
	I0925 12:29:17.074473    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed5fe57e6c0"
	I0925 12:29:17.088570    5014 logs.go:123] Gathering logs for etcd [037eede8142c] ...
	I0925 12:29:17.088579    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 037eede8142c"
	I0925 12:29:17.102330    5014 logs.go:123] Gathering logs for coredns [5516aacf7bec] ...
	I0925 12:29:17.102341    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5516aacf7bec"
	I0925 12:29:19.616290    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:29:24.617609    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:29:24.618111    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:29:24.656138    5014 logs.go:276] 1 containers: [2ed5fe57e6c0]
	I0925 12:29:24.656306    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:29:24.676796    5014 logs.go:276] 1 containers: [037eede8142c]
	I0925 12:29:24.676927    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:29:24.692537    5014 logs.go:276] 4 containers: [52d9a945531c 88e1d5f8332c 5516aacf7bec 2165303b7771]
	I0925 12:29:24.692627    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:29:24.705164    5014 logs.go:276] 1 containers: [5f1c4edd1eb4]
	I0925 12:29:24.705233    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:29:24.715805    5014 logs.go:276] 1 containers: [caa3fd1f1297]
	I0925 12:29:24.715868    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:29:24.726353    5014 logs.go:276] 1 containers: [79a81322c101]
	I0925 12:29:24.726435    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:29:24.736799    5014 logs.go:276] 0 containers: []
	W0925 12:29:24.736816    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:29:24.736882    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:29:24.748128    5014 logs.go:276] 1 containers: [d64985fba232]
	I0925 12:29:24.748144    5014 logs.go:123] Gathering logs for coredns [5516aacf7bec] ...
	I0925 12:29:24.748149    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5516aacf7bec"
	I0925 12:29:24.759777    5014 logs.go:123] Gathering logs for storage-provisioner [d64985fba232] ...
	I0925 12:29:24.759786    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64985fba232"
	I0925 12:29:24.773877    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:29:24.773888    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:29:24.798098    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:29:24.798105    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:29:24.833826    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:29:24.833832    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:29:24.868325    5014 logs.go:123] Gathering logs for kube-controller-manager [79a81322c101] ...
	I0925 12:29:24.868337    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79a81322c101"
	I0925 12:29:24.890300    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:29:24.890310    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:29:24.901795    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:29:24.901807    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:29:24.906010    5014 logs.go:123] Gathering logs for coredns [52d9a945531c] ...
	I0925 12:29:24.906017    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d9a945531c"
	I0925 12:29:24.917438    5014 logs.go:123] Gathering logs for coredns [88e1d5f8332c] ...
	I0925 12:29:24.917448    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88e1d5f8332c"
	I0925 12:29:24.929098    5014 logs.go:123] Gathering logs for kube-scheduler [5f1c4edd1eb4] ...
	I0925 12:29:24.929107    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1c4edd1eb4"
	I0925 12:29:24.944028    5014 logs.go:123] Gathering logs for kube-proxy [caa3fd1f1297] ...
	I0925 12:29:24.944039    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa3fd1f1297"
	I0925 12:29:24.960599    5014 logs.go:123] Gathering logs for kube-apiserver [2ed5fe57e6c0] ...
	I0925 12:29:24.960609    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed5fe57e6c0"
	I0925 12:29:24.977258    5014 logs.go:123] Gathering logs for coredns [2165303b7771] ...
	I0925 12:29:24.977267    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2165303b7771"
	I0925 12:29:24.988981    5014 logs.go:123] Gathering logs for etcd [037eede8142c] ...
	I0925 12:29:24.988996    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 037eede8142c"
	I0925 12:29:27.509313    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:29:32.511929    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:29:32.512375    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:29:32.547960    5014 logs.go:276] 1 containers: [2ed5fe57e6c0]
	I0925 12:29:32.548103    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:29:32.568623    5014 logs.go:276] 1 containers: [037eede8142c]
	I0925 12:29:32.568757    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:29:32.583388    5014 logs.go:276] 4 containers: [52d9a945531c 88e1d5f8332c 5516aacf7bec 2165303b7771]
	I0925 12:29:32.583482    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:29:32.594974    5014 logs.go:276] 1 containers: [5f1c4edd1eb4]
	I0925 12:29:32.595059    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:29:32.606104    5014 logs.go:276] 1 containers: [caa3fd1f1297]
	I0925 12:29:32.606176    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:29:32.616550    5014 logs.go:276] 1 containers: [79a81322c101]
	I0925 12:29:32.616629    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:29:32.627356    5014 logs.go:276] 0 containers: []
	W0925 12:29:32.627368    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:29:32.627443    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:29:32.637648    5014 logs.go:276] 1 containers: [d64985fba232]
	I0925 12:29:32.637667    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:29:32.637673    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:29:32.641914    5014 logs.go:123] Gathering logs for coredns [5516aacf7bec] ...
	I0925 12:29:32.641923    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5516aacf7bec"
	I0925 12:29:32.655351    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:29:32.655366    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:29:32.666880    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:29:32.666891    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:29:32.704785    5014 logs.go:123] Gathering logs for coredns [88e1d5f8332c] ...
	I0925 12:29:32.704792    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88e1d5f8332c"
	I0925 12:29:32.716137    5014 logs.go:123] Gathering logs for coredns [2165303b7771] ...
	I0925 12:29:32.716148    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2165303b7771"
	I0925 12:29:32.728431    5014 logs.go:123] Gathering logs for kube-scheduler [5f1c4edd1eb4] ...
	I0925 12:29:32.728441    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1c4edd1eb4"
	I0925 12:29:32.743211    5014 logs.go:123] Gathering logs for kube-apiserver [2ed5fe57e6c0] ...
	I0925 12:29:32.743221    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed5fe57e6c0"
	I0925 12:29:32.757657    5014 logs.go:123] Gathering logs for coredns [52d9a945531c] ...
	I0925 12:29:32.757668    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d9a945531c"
	I0925 12:29:32.770811    5014 logs.go:123] Gathering logs for storage-provisioner [d64985fba232] ...
	I0925 12:29:32.770824    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64985fba232"
	I0925 12:29:32.782415    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:29:32.782425    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:29:32.816193    5014 logs.go:123] Gathering logs for etcd [037eede8142c] ...
	I0925 12:29:32.816208    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 037eede8142c"
	I0925 12:29:32.830770    5014 logs.go:123] Gathering logs for kube-proxy [caa3fd1f1297] ...
	I0925 12:29:32.830780    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa3fd1f1297"
	I0925 12:29:32.842577    5014 logs.go:123] Gathering logs for kube-controller-manager [79a81322c101] ...
	I0925 12:29:32.842588    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79a81322c101"
	I0925 12:29:32.860401    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:29:32.860410    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:29:35.388274    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:29:40.390902    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:29:40.390985    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:29:40.403176    5014 logs.go:276] 1 containers: [2ed5fe57e6c0]
	I0925 12:29:40.403274    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:29:40.414954    5014 logs.go:276] 1 containers: [037eede8142c]
	I0925 12:29:40.415013    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:29:40.425704    5014 logs.go:276] 4 containers: [52d9a945531c 88e1d5f8332c 5516aacf7bec 2165303b7771]
	I0925 12:29:40.425778    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:29:40.437172    5014 logs.go:276] 1 containers: [5f1c4edd1eb4]
	I0925 12:29:40.437252    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:29:40.449233    5014 logs.go:276] 1 containers: [caa3fd1f1297]
	I0925 12:29:40.449299    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:29:40.460310    5014 logs.go:276] 1 containers: [79a81322c101]
	I0925 12:29:40.460374    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:29:40.470770    5014 logs.go:276] 0 containers: []
	W0925 12:29:40.470784    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:29:40.470851    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:29:40.483669    5014 logs.go:276] 1 containers: [d64985fba232]
	I0925 12:29:40.483683    5014 logs.go:123] Gathering logs for coredns [88e1d5f8332c] ...
	I0925 12:29:40.483691    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88e1d5f8332c"
	I0925 12:29:40.497302    5014 logs.go:123] Gathering logs for kube-proxy [caa3fd1f1297] ...
	I0925 12:29:40.497314    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa3fd1f1297"
	I0925 12:29:40.510118    5014 logs.go:123] Gathering logs for coredns [5516aacf7bec] ...
	I0925 12:29:40.510126    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5516aacf7bec"
	I0925 12:29:40.522393    5014 logs.go:123] Gathering logs for kube-scheduler [5f1c4edd1eb4] ...
	I0925 12:29:40.522404    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1c4edd1eb4"
	I0925 12:29:40.545919    5014 logs.go:123] Gathering logs for kube-controller-manager [79a81322c101] ...
	I0925 12:29:40.545928    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79a81322c101"
	I0925 12:29:40.563868    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:29:40.563881    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:29:40.578637    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:29:40.578649    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:29:40.615888    5014 logs.go:123] Gathering logs for kube-apiserver [2ed5fe57e6c0] ...
	I0925 12:29:40.615900    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed5fe57e6c0"
	I0925 12:29:40.631506    5014 logs.go:123] Gathering logs for coredns [52d9a945531c] ...
	I0925 12:29:40.631519    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d9a945531c"
	I0925 12:29:40.643796    5014 logs.go:123] Gathering logs for storage-provisioner [d64985fba232] ...
	I0925 12:29:40.643807    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64985fba232"
	I0925 12:29:40.655789    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:29:40.655803    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:29:40.682558    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:29:40.682575    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:29:40.722124    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:29:40.722145    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:29:40.727343    5014 logs.go:123] Gathering logs for etcd [037eede8142c] ...
	I0925 12:29:40.727352    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 037eede8142c"
	I0925 12:29:40.746865    5014 logs.go:123] Gathering logs for coredns [2165303b7771] ...
	I0925 12:29:40.746881    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2165303b7771"
	I0925 12:29:43.261365    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:29:48.264050    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:29:48.264644    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:29:48.303283    5014 logs.go:276] 1 containers: [2ed5fe57e6c0]
	I0925 12:29:48.303452    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:29:48.325489    5014 logs.go:276] 1 containers: [037eede8142c]
	I0925 12:29:48.325619    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:29:48.340654    5014 logs.go:276] 4 containers: [52d9a945531c 88e1d5f8332c 5516aacf7bec 2165303b7771]
	I0925 12:29:48.340749    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:29:48.352973    5014 logs.go:276] 1 containers: [5f1c4edd1eb4]
	I0925 12:29:48.353054    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:29:48.364054    5014 logs.go:276] 1 containers: [caa3fd1f1297]
	I0925 12:29:48.364121    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:29:48.383778    5014 logs.go:276] 1 containers: [79a81322c101]
	I0925 12:29:48.383857    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:29:48.395622    5014 logs.go:276] 0 containers: []
	W0925 12:29:48.395633    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:29:48.395706    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:29:48.416972    5014 logs.go:276] 1 containers: [d64985fba232]
	I0925 12:29:48.416990    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:29:48.416997    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:29:48.453453    5014 logs.go:123] Gathering logs for etcd [037eede8142c] ...
	I0925 12:29:48.453468    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 037eede8142c"
	I0925 12:29:48.468009    5014 logs.go:123] Gathering logs for coredns [5516aacf7bec] ...
	I0925 12:29:48.468021    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5516aacf7bec"
	I0925 12:29:48.479930    5014 logs.go:123] Gathering logs for kube-scheduler [5f1c4edd1eb4] ...
	I0925 12:29:48.479943    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1c4edd1eb4"
	I0925 12:29:48.497923    5014 logs.go:123] Gathering logs for kube-proxy [caa3fd1f1297] ...
	I0925 12:29:48.497934    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa3fd1f1297"
	I0925 12:29:48.509940    5014 logs.go:123] Gathering logs for coredns [88e1d5f8332c] ...
	I0925 12:29:48.509953    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88e1d5f8332c"
	I0925 12:29:48.521592    5014 logs.go:123] Gathering logs for storage-provisioner [d64985fba232] ...
	I0925 12:29:48.521608    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64985fba232"
	I0925 12:29:48.532916    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:29:48.532929    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:29:48.558000    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:29:48.558009    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:29:48.561892    5014 logs.go:123] Gathering logs for coredns [2165303b7771] ...
	I0925 12:29:48.561900    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2165303b7771"
	I0925 12:29:48.573181    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:29:48.573191    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:29:48.611210    5014 logs.go:123] Gathering logs for kube-apiserver [2ed5fe57e6c0] ...
	I0925 12:29:48.611216    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed5fe57e6c0"
	I0925 12:29:48.625491    5014 logs.go:123] Gathering logs for coredns [52d9a945531c] ...
	I0925 12:29:48.625502    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d9a945531c"
	I0925 12:29:48.637414    5014 logs.go:123] Gathering logs for kube-controller-manager [79a81322c101] ...
	I0925 12:29:48.637427    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79a81322c101"
	I0925 12:29:48.654740    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:29:48.654751    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:29:51.169400    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:29:56.171784    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:29:56.172041    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:29:56.193570    5014 logs.go:276] 1 containers: [2ed5fe57e6c0]
	I0925 12:29:56.193674    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:29:56.207438    5014 logs.go:276] 1 containers: [037eede8142c]
	I0925 12:29:56.207512    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:29:56.219852    5014 logs.go:276] 4 containers: [52d9a945531c 88e1d5f8332c 5516aacf7bec 2165303b7771]
	I0925 12:29:56.219936    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:29:56.230191    5014 logs.go:276] 1 containers: [5f1c4edd1eb4]
	I0925 12:29:56.230272    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:29:56.242672    5014 logs.go:276] 1 containers: [caa3fd1f1297]
	I0925 12:29:56.242753    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:29:56.253421    5014 logs.go:276] 1 containers: [79a81322c101]
	I0925 12:29:56.253495    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:29:56.262959    5014 logs.go:276] 0 containers: []
	W0925 12:29:56.262971    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:29:56.263038    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:29:56.277084    5014 logs.go:276] 1 containers: [d64985fba232]
	I0925 12:29:56.277100    5014 logs.go:123] Gathering logs for coredns [2165303b7771] ...
	I0925 12:29:56.277105    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2165303b7771"
	I0925 12:29:56.288613    5014 logs.go:123] Gathering logs for coredns [52d9a945531c] ...
	I0925 12:29:56.288622    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d9a945531c"
	I0925 12:29:56.300813    5014 logs.go:123] Gathering logs for coredns [88e1d5f8332c] ...
	I0925 12:29:56.300827    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88e1d5f8332c"
	I0925 12:29:56.316491    5014 logs.go:123] Gathering logs for kube-proxy [caa3fd1f1297] ...
	I0925 12:29:56.316499    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa3fd1f1297"
	I0925 12:29:56.328098    5014 logs.go:123] Gathering logs for kube-controller-manager [79a81322c101] ...
	I0925 12:29:56.328108    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79a81322c101"
	I0925 12:29:56.347215    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:29:56.347223    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:29:56.351967    5014 logs.go:123] Gathering logs for kube-scheduler [5f1c4edd1eb4] ...
	I0925 12:29:56.351975    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1c4edd1eb4"
	I0925 12:29:56.373723    5014 logs.go:123] Gathering logs for storage-provisioner [d64985fba232] ...
	I0925 12:29:56.373736    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64985fba232"
	I0925 12:29:56.385483    5014 logs.go:123] Gathering logs for coredns [5516aacf7bec] ...
	I0925 12:29:56.385494    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5516aacf7bec"
	I0925 12:29:56.396920    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:29:56.396934    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:29:56.436047    5014 logs.go:123] Gathering logs for kube-apiserver [2ed5fe57e6c0] ...
	I0925 12:29:56.436056    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed5fe57e6c0"
	I0925 12:29:56.450371    5014 logs.go:123] Gathering logs for etcd [037eede8142c] ...
	I0925 12:29:56.450381    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 037eede8142c"
	I0925 12:29:56.464260    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:29:56.464273    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:29:56.489740    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:29:56.489750    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:29:56.500827    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:29:56.500839    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:29:59.040245    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:30:04.041433    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:30:04.041518    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:30:04.053221    5014 logs.go:276] 1 containers: [2ed5fe57e6c0]
	I0925 12:30:04.053313    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:30:04.065557    5014 logs.go:276] 1 containers: [037eede8142c]
	I0925 12:30:04.065644    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:30:04.077914    5014 logs.go:276] 4 containers: [52d9a945531c 88e1d5f8332c 5516aacf7bec 2165303b7771]
	I0925 12:30:04.078003    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:30:04.089314    5014 logs.go:276] 1 containers: [5f1c4edd1eb4]
	I0925 12:30:04.089416    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:30:04.100921    5014 logs.go:276] 1 containers: [caa3fd1f1297]
	I0925 12:30:04.100979    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:30:04.112140    5014 logs.go:276] 1 containers: [79a81322c101]
	I0925 12:30:04.112227    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:30:04.123730    5014 logs.go:276] 0 containers: []
	W0925 12:30:04.123742    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:30:04.123809    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:30:04.135673    5014 logs.go:276] 1 containers: [d64985fba232]
	I0925 12:30:04.135691    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:30:04.135696    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:30:04.175830    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:30:04.175844    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:30:04.213419    5014 logs.go:123] Gathering logs for coredns [5516aacf7bec] ...
	I0925 12:30:04.213428    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5516aacf7bec"
	I0925 12:30:04.226696    5014 logs.go:123] Gathering logs for coredns [2165303b7771] ...
	I0925 12:30:04.226707    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2165303b7771"
	I0925 12:30:04.239788    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:30:04.239800    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:30:04.251893    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:30:04.251903    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:30:04.256137    5014 logs.go:123] Gathering logs for coredns [52d9a945531c] ...
	I0925 12:30:04.256144    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d9a945531c"
	I0925 12:30:04.268687    5014 logs.go:123] Gathering logs for kube-proxy [caa3fd1f1297] ...
	I0925 12:30:04.268700    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa3fd1f1297"
	I0925 12:30:04.282207    5014 logs.go:123] Gathering logs for storage-provisioner [d64985fba232] ...
	I0925 12:30:04.282218    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64985fba232"
	I0925 12:30:04.294762    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:30:04.294771    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:30:04.319626    5014 logs.go:123] Gathering logs for kube-apiserver [2ed5fe57e6c0] ...
	I0925 12:30:04.319641    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed5fe57e6c0"
	I0925 12:30:04.336134    5014 logs.go:123] Gathering logs for etcd [037eede8142c] ...
	I0925 12:30:04.336144    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 037eede8142c"
	I0925 12:30:04.369655    5014 logs.go:123] Gathering logs for kube-scheduler [5f1c4edd1eb4] ...
	I0925 12:30:04.369668    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1c4edd1eb4"
	I0925 12:30:04.390319    5014 logs.go:123] Gathering logs for coredns [88e1d5f8332c] ...
	I0925 12:30:04.390335    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88e1d5f8332c"
	I0925 12:30:04.409742    5014 logs.go:123] Gathering logs for kube-controller-manager [79a81322c101] ...
	I0925 12:30:04.409755    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79a81322c101"
	I0925 12:30:06.931119    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:30:11.933970    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:30:11.934593    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:30:11.970501    5014 logs.go:276] 1 containers: [2ed5fe57e6c0]
	I0925 12:30:11.970668    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:30:11.991223    5014 logs.go:276] 1 containers: [037eede8142c]
	I0925 12:30:11.991358    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:30:12.006977    5014 logs.go:276] 4 containers: [52d9a945531c 88e1d5f8332c 5516aacf7bec 2165303b7771]
	I0925 12:30:12.007068    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:30:12.019373    5014 logs.go:276] 1 containers: [5f1c4edd1eb4]
	I0925 12:30:12.019454    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:30:12.030080    5014 logs.go:276] 1 containers: [caa3fd1f1297]
	I0925 12:30:12.030160    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:30:12.045070    5014 logs.go:276] 1 containers: [79a81322c101]
	I0925 12:30:12.045155    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:30:12.056522    5014 logs.go:276] 0 containers: []
	W0925 12:30:12.056533    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:30:12.056596    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:30:12.067692    5014 logs.go:276] 1 containers: [d64985fba232]
	I0925 12:30:12.067711    5014 logs.go:123] Gathering logs for kube-apiserver [2ed5fe57e6c0] ...
	I0925 12:30:12.067717    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed5fe57e6c0"
	I0925 12:30:12.082139    5014 logs.go:123] Gathering logs for coredns [52d9a945531c] ...
	I0925 12:30:12.082153    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d9a945531c"
	I0925 12:30:12.093819    5014 logs.go:123] Gathering logs for coredns [88e1d5f8332c] ...
	I0925 12:30:12.093832    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88e1d5f8332c"
	I0925 12:30:12.105446    5014 logs.go:123] Gathering logs for kube-scheduler [5f1c4edd1eb4] ...
	I0925 12:30:12.105457    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1c4edd1eb4"
	I0925 12:30:12.120799    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:30:12.120812    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:30:12.132226    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:30:12.132239    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:30:12.166409    5014 logs.go:123] Gathering logs for coredns [5516aacf7bec] ...
	I0925 12:30:12.166421    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5516aacf7bec"
	I0925 12:30:12.178452    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:30:12.178461    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:30:12.183635    5014 logs.go:123] Gathering logs for etcd [037eede8142c] ...
	I0925 12:30:12.183647    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 037eede8142c"
	I0925 12:30:12.197904    5014 logs.go:123] Gathering logs for coredns [2165303b7771] ...
	I0925 12:30:12.197915    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2165303b7771"
	I0925 12:30:12.210449    5014 logs.go:123] Gathering logs for kube-proxy [caa3fd1f1297] ...
	I0925 12:30:12.210461    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa3fd1f1297"
	I0925 12:30:12.222140    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:30:12.222151    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:30:12.257997    5014 logs.go:123] Gathering logs for kube-controller-manager [79a81322c101] ...
	I0925 12:30:12.258005    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79a81322c101"
	I0925 12:30:12.275001    5014 logs.go:123] Gathering logs for storage-provisioner [d64985fba232] ...
	I0925 12:30:12.275011    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64985fba232"
	I0925 12:30:12.287479    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:30:12.287490    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:30:14.814040    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:30:19.816587    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:30:19.816823    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:30:19.840500    5014 logs.go:276] 1 containers: [2ed5fe57e6c0]
	I0925 12:30:19.840639    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:30:19.857176    5014 logs.go:276] 1 containers: [037eede8142c]
	I0925 12:30:19.857279    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:30:19.870133    5014 logs.go:276] 4 containers: [52d9a945531c 88e1d5f8332c 5516aacf7bec 2165303b7771]
	I0925 12:30:19.870211    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:30:19.881297    5014 logs.go:276] 1 containers: [5f1c4edd1eb4]
	I0925 12:30:19.881361    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:30:19.891706    5014 logs.go:276] 1 containers: [caa3fd1f1297]
	I0925 12:30:19.891779    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:30:19.902742    5014 logs.go:276] 1 containers: [79a81322c101]
	I0925 12:30:19.902825    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:30:19.912773    5014 logs.go:276] 0 containers: []
	W0925 12:30:19.912784    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:30:19.912843    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:30:19.922741    5014 logs.go:276] 1 containers: [d64985fba232]
	I0925 12:30:19.922759    5014 logs.go:123] Gathering logs for kube-apiserver [2ed5fe57e6c0] ...
	I0925 12:30:19.922765    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed5fe57e6c0"
	I0925 12:30:19.937232    5014 logs.go:123] Gathering logs for coredns [88e1d5f8332c] ...
	I0925 12:30:19.937243    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88e1d5f8332c"
	I0925 12:30:19.948617    5014 logs.go:123] Gathering logs for coredns [2165303b7771] ...
	I0925 12:30:19.948631    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2165303b7771"
	I0925 12:30:19.960339    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:30:19.960349    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:30:19.971877    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:30:19.971892    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:30:20.008596    5014 logs.go:123] Gathering logs for coredns [5516aacf7bec] ...
	I0925 12:30:20.008605    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5516aacf7bec"
	I0925 12:30:20.020157    5014 logs.go:123] Gathering logs for kube-controller-manager [79a81322c101] ...
	I0925 12:30:20.020168    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79a81322c101"
	I0925 12:30:20.037696    5014 logs.go:123] Gathering logs for etcd [037eede8142c] ...
	I0925 12:30:20.037709    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 037eede8142c"
	I0925 12:30:20.051595    5014 logs.go:123] Gathering logs for coredns [52d9a945531c] ...
	I0925 12:30:20.051608    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d9a945531c"
	I0925 12:30:20.068468    5014 logs.go:123] Gathering logs for storage-provisioner [d64985fba232] ...
	I0925 12:30:20.068484    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64985fba232"
	I0925 12:30:20.086799    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:30:20.086813    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:30:20.111971    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:30:20.111984    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:30:20.116344    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:30:20.116352    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:30:20.154387    5014 logs.go:123] Gathering logs for kube-scheduler [5f1c4edd1eb4] ...
	I0925 12:30:20.154402    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1c4edd1eb4"
	I0925 12:30:20.169702    5014 logs.go:123] Gathering logs for kube-proxy [caa3fd1f1297] ...
	I0925 12:30:20.169711    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa3fd1f1297"
	I0925 12:30:22.683809    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:30:27.686131    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:30:27.686385    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:30:27.703710    5014 logs.go:276] 1 containers: [2ed5fe57e6c0]
	I0925 12:30:27.703811    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:30:27.717048    5014 logs.go:276] 1 containers: [037eede8142c]
	I0925 12:30:27.717131    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:30:27.729243    5014 logs.go:276] 4 containers: [52d9a945531c 88e1d5f8332c 5516aacf7bec 2165303b7771]
	I0925 12:30:27.729320    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:30:27.741215    5014 logs.go:276] 1 containers: [5f1c4edd1eb4]
	I0925 12:30:27.741306    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:30:27.754782    5014 logs.go:276] 1 containers: [caa3fd1f1297]
	I0925 12:30:27.754871    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:30:27.767326    5014 logs.go:276] 1 containers: [79a81322c101]
	I0925 12:30:27.767404    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:30:27.778319    5014 logs.go:276] 0 containers: []
	W0925 12:30:27.778332    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:30:27.778404    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:30:27.792705    5014 logs.go:276] 1 containers: [d64985fba232]
	I0925 12:30:27.792726    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:30:27.792732    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:30:27.829215    5014 logs.go:123] Gathering logs for kube-proxy [caa3fd1f1297] ...
	I0925 12:30:27.829229    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa3fd1f1297"
	I0925 12:30:27.841908    5014 logs.go:123] Gathering logs for storage-provisioner [d64985fba232] ...
	I0925 12:30:27.841923    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64985fba232"
	I0925 12:30:27.854192    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:30:27.854205    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:30:27.866966    5014 logs.go:123] Gathering logs for etcd [037eede8142c] ...
	I0925 12:30:27.866982    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 037eede8142c"
	I0925 12:30:27.885924    5014 logs.go:123] Gathering logs for coredns [52d9a945531c] ...
	I0925 12:30:27.885940    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d9a945531c"
	I0925 12:30:27.898702    5014 logs.go:123] Gathering logs for coredns [88e1d5f8332c] ...
	I0925 12:30:27.898717    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88e1d5f8332c"
	I0925 12:30:27.910904    5014 logs.go:123] Gathering logs for coredns [5516aacf7bec] ...
	I0925 12:30:27.910918    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5516aacf7bec"
	I0925 12:30:27.924000    5014 logs.go:123] Gathering logs for kube-scheduler [5f1c4edd1eb4] ...
	I0925 12:30:27.924010    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1c4edd1eb4"
	I0925 12:30:27.943771    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:30:27.943784    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:30:27.948707    5014 logs.go:123] Gathering logs for kube-apiserver [2ed5fe57e6c0] ...
	I0925 12:30:27.948720    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed5fe57e6c0"
	I0925 12:30:27.969062    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:30:27.969077    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:30:28.009711    5014 logs.go:123] Gathering logs for coredns [2165303b7771] ...
	I0925 12:30:28.009731    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2165303b7771"
	I0925 12:30:28.022915    5014 logs.go:123] Gathering logs for kube-controller-manager [79a81322c101] ...
	I0925 12:30:28.022929    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79a81322c101"
	I0925 12:30:28.042459    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:30:28.042472    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:30:30.570252    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:30:35.572628    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:30:35.573220    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:30:35.614880    5014 logs.go:276] 1 containers: [2ed5fe57e6c0]
	I0925 12:30:35.615038    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:30:35.640194    5014 logs.go:276] 1 containers: [037eede8142c]
	I0925 12:30:35.640324    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:30:35.655094    5014 logs.go:276] 4 containers: [52d9a945531c 88e1d5f8332c 5516aacf7bec 2165303b7771]
	I0925 12:30:35.655186    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:30:35.667195    5014 logs.go:276] 1 containers: [5f1c4edd1eb4]
	I0925 12:30:35.667277    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:30:35.678479    5014 logs.go:276] 1 containers: [caa3fd1f1297]
	I0925 12:30:35.678557    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:30:35.689108    5014 logs.go:276] 1 containers: [79a81322c101]
	I0925 12:30:35.689191    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:30:35.699200    5014 logs.go:276] 0 containers: []
	W0925 12:30:35.699214    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:30:35.699274    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:30:35.713612    5014 logs.go:276] 1 containers: [d64985fba232]
	I0925 12:30:35.713629    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:30:35.713634    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:30:35.748706    5014 logs.go:123] Gathering logs for coredns [52d9a945531c] ...
	I0925 12:30:35.748722    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d9a945531c"
	I0925 12:30:35.760227    5014 logs.go:123] Gathering logs for coredns [5516aacf7bec] ...
	I0925 12:30:35.760241    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5516aacf7bec"
	I0925 12:30:35.772934    5014 logs.go:123] Gathering logs for kube-controller-manager [79a81322c101] ...
	I0925 12:30:35.772947    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79a81322c101"
	I0925 12:30:35.792146    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:30:35.792162    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:30:35.817355    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:30:35.817364    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:30:35.828721    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:30:35.828735    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:30:35.832942    5014 logs.go:123] Gathering logs for etcd [037eede8142c] ...
	I0925 12:30:35.832950    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 037eede8142c"
	I0925 12:30:35.847019    5014 logs.go:123] Gathering logs for coredns [2165303b7771] ...
	I0925 12:30:35.847030    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2165303b7771"
	I0925 12:30:35.858281    5014 logs.go:123] Gathering logs for kube-proxy [caa3fd1f1297] ...
	I0925 12:30:35.858291    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa3fd1f1297"
	I0925 12:30:35.877945    5014 logs.go:123] Gathering logs for kube-apiserver [2ed5fe57e6c0] ...
	I0925 12:30:35.877955    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed5fe57e6c0"
	I0925 12:30:35.892154    5014 logs.go:123] Gathering logs for kube-scheduler [5f1c4edd1eb4] ...
	I0925 12:30:35.892163    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1c4edd1eb4"
	I0925 12:30:35.906545    5014 logs.go:123] Gathering logs for storage-provisioner [d64985fba232] ...
	I0925 12:30:35.906554    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64985fba232"
	I0925 12:30:35.917947    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:30:35.917957    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:30:35.954352    5014 logs.go:123] Gathering logs for coredns [88e1d5f8332c] ...
	I0925 12:30:35.954362    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88e1d5f8332c"
	I0925 12:30:38.473244    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:30:43.485176    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:30:43.485677    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:30:43.516223    5014 logs.go:276] 1 containers: [2ed5fe57e6c0]
	I0925 12:30:43.516363    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:30:43.536410    5014 logs.go:276] 1 containers: [037eede8142c]
	I0925 12:30:43.536519    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:30:43.552614    5014 logs.go:276] 4 containers: [52d9a945531c 88e1d5f8332c 5516aacf7bec 2165303b7771]
	I0925 12:30:43.552710    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:30:43.564529    5014 logs.go:276] 1 containers: [5f1c4edd1eb4]
	I0925 12:30:43.564610    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:30:43.574854    5014 logs.go:276] 1 containers: [caa3fd1f1297]
	I0925 12:30:43.574930    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:30:43.585588    5014 logs.go:276] 1 containers: [79a81322c101]
	I0925 12:30:43.585662    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:30:43.596109    5014 logs.go:276] 0 containers: []
	W0925 12:30:43.596118    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:30:43.596176    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:30:43.606201    5014 logs.go:276] 1 containers: [d64985fba232]
	I0925 12:30:43.606217    5014 logs.go:123] Gathering logs for coredns [5516aacf7bec] ...
	I0925 12:30:43.606222    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5516aacf7bec"
	I0925 12:30:43.618558    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:30:43.618571    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:30:43.632018    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:30:43.632031    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:30:43.637776    5014 logs.go:123] Gathering logs for coredns [88e1d5f8332c] ...
	I0925 12:30:43.637788    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88e1d5f8332c"
	I0925 12:30:43.673431    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:30:43.673446    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:30:43.699034    5014 logs.go:123] Gathering logs for kube-apiserver [2ed5fe57e6c0] ...
	I0925 12:30:43.699045    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed5fe57e6c0"
	I0925 12:30:43.713441    5014 logs.go:123] Gathering logs for etcd [037eede8142c] ...
	I0925 12:30:43.713452    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 037eede8142c"
	I0925 12:30:43.733825    5014 logs.go:123] Gathering logs for kube-proxy [caa3fd1f1297] ...
	I0925 12:30:43.733840    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa3fd1f1297"
	I0925 12:30:43.750038    5014 logs.go:123] Gathering logs for kube-controller-manager [79a81322c101] ...
	I0925 12:30:43.750049    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79a81322c101"
	I0925 12:30:43.767277    5014 logs.go:123] Gathering logs for storage-provisioner [d64985fba232] ...
	I0925 12:30:43.767288    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64985fba232"
	I0925 12:30:43.783407    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:30:43.783419    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:30:43.820188    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:30:43.820200    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:30:43.854673    5014 logs.go:123] Gathering logs for coredns [52d9a945531c] ...
	I0925 12:30:43.854685    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d9a945531c"
	I0925 12:30:43.866880    5014 logs.go:123] Gathering logs for coredns [2165303b7771] ...
	I0925 12:30:43.866896    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2165303b7771"
	I0925 12:30:43.878768    5014 logs.go:123] Gathering logs for kube-scheduler [5f1c4edd1eb4] ...
	I0925 12:30:43.878778    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1c4edd1eb4"
	I0925 12:30:46.399380    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:30:51.407688    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:30:51.408205    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:30:51.448352    5014 logs.go:276] 1 containers: [2ed5fe57e6c0]
	I0925 12:30:51.448513    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:30:51.469928    5014 logs.go:276] 1 containers: [037eede8142c]
	I0925 12:30:51.470051    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:30:51.486759    5014 logs.go:276] 4 containers: [52d9a945531c 88e1d5f8332c 5516aacf7bec 2165303b7771]
	I0925 12:30:51.486849    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:30:51.500160    5014 logs.go:276] 1 containers: [5f1c4edd1eb4]
	I0925 12:30:51.500247    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:30:51.511554    5014 logs.go:276] 1 containers: [caa3fd1f1297]
	I0925 12:30:51.511632    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:30:51.525879    5014 logs.go:276] 1 containers: [79a81322c101]
	I0925 12:30:51.525958    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:30:51.536976    5014 logs.go:276] 0 containers: []
	W0925 12:30:51.536987    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:30:51.537053    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:30:51.547570    5014 logs.go:276] 1 containers: [d64985fba232]
	I0925 12:30:51.547586    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:30:51.547591    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:30:51.551876    5014 logs.go:123] Gathering logs for etcd [037eede8142c] ...
	I0925 12:30:51.551881    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 037eede8142c"
	I0925 12:30:51.566409    5014 logs.go:123] Gathering logs for coredns [2165303b7771] ...
	I0925 12:30:51.566421    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2165303b7771"
	I0925 12:30:51.578212    5014 logs.go:123] Gathering logs for coredns [88e1d5f8332c] ...
	I0925 12:30:51.578223    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88e1d5f8332c"
	I0925 12:30:51.590277    5014 logs.go:123] Gathering logs for kube-proxy [caa3fd1f1297] ...
	I0925 12:30:51.590290    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa3fd1f1297"
	I0925 12:30:51.602056    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:30:51.602065    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:30:51.613944    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:30:51.613959    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:30:51.651231    5014 logs.go:123] Gathering logs for coredns [52d9a945531c] ...
	I0925 12:30:51.651241    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d9a945531c"
	I0925 12:30:51.663846    5014 logs.go:123] Gathering logs for kube-scheduler [5f1c4edd1eb4] ...
	I0925 12:30:51.663859    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1c4edd1eb4"
	I0925 12:30:51.679761    5014 logs.go:123] Gathering logs for kube-controller-manager [79a81322c101] ...
	I0925 12:30:51.679775    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79a81322c101"
	I0925 12:30:51.698047    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:30:51.698056    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:30:51.733010    5014 logs.go:123] Gathering logs for kube-apiserver [2ed5fe57e6c0] ...
	I0925 12:30:51.733021    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed5fe57e6c0"
	I0925 12:30:51.747559    5014 logs.go:123] Gathering logs for coredns [5516aacf7bec] ...
	I0925 12:30:51.747571    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5516aacf7bec"
	I0925 12:30:51.759148    5014 logs.go:123] Gathering logs for storage-provisioner [d64985fba232] ...
	I0925 12:30:51.759161    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64985fba232"
	I0925 12:30:51.770673    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:30:51.770686    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:30:54.299935    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:30:59.304353    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:30:59.304856    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0925 12:30:59.341662    5014 logs.go:276] 1 containers: [2ed5fe57e6c0]
	I0925 12:30:59.341810    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0925 12:30:59.362156    5014 logs.go:276] 1 containers: [037eede8142c]
	I0925 12:30:59.362289    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0925 12:30:59.376575    5014 logs.go:276] 4 containers: [52d9a945531c 88e1d5f8332c 5516aacf7bec 2165303b7771]
	I0925 12:30:59.376680    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0925 12:30:59.388918    5014 logs.go:276] 1 containers: [5f1c4edd1eb4]
	I0925 12:30:59.389011    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0925 12:30:59.399573    5014 logs.go:276] 1 containers: [caa3fd1f1297]
	I0925 12:30:59.399646    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0925 12:30:59.410223    5014 logs.go:276] 1 containers: [79a81322c101]
	I0925 12:30:59.410297    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0925 12:30:59.420101    5014 logs.go:276] 0 containers: []
	W0925 12:30:59.420113    5014 logs.go:278] No container was found matching "kindnet"
	I0925 12:30:59.420165    5014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0925 12:30:59.435359    5014 logs.go:276] 1 containers: [d64985fba232]
	I0925 12:30:59.435377    5014 logs.go:123] Gathering logs for etcd [037eede8142c] ...
	I0925 12:30:59.435383    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 037eede8142c"
	I0925 12:30:59.449358    5014 logs.go:123] Gathering logs for coredns [52d9a945531c] ...
	I0925 12:30:59.449367    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52d9a945531c"
	I0925 12:30:59.461198    5014 logs.go:123] Gathering logs for Docker ...
	I0925 12:30:59.461208    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0925 12:30:59.483806    5014 logs.go:123] Gathering logs for coredns [88e1d5f8332c] ...
	I0925 12:30:59.483815    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88e1d5f8332c"
	I0925 12:30:59.499420    5014 logs.go:123] Gathering logs for kube-scheduler [5f1c4edd1eb4] ...
	I0925 12:30:59.499438    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f1c4edd1eb4"
	I0925 12:30:59.515077    5014 logs.go:123] Gathering logs for kube-controller-manager [79a81322c101] ...
	I0925 12:30:59.515087    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79a81322c101"
	I0925 12:30:59.535090    5014 logs.go:123] Gathering logs for kubelet ...
	I0925 12:30:59.535101    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0925 12:30:59.571617    5014 logs.go:123] Gathering logs for dmesg ...
	I0925 12:30:59.571625    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0925 12:30:59.576078    5014 logs.go:123] Gathering logs for describe nodes ...
	I0925 12:30:59.576085    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0925 12:30:59.613543    5014 logs.go:123] Gathering logs for coredns [2165303b7771] ...
	I0925 12:30:59.613557    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2165303b7771"
	I0925 12:30:59.625650    5014 logs.go:123] Gathering logs for kube-apiserver [2ed5fe57e6c0] ...
	I0925 12:30:59.625661    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed5fe57e6c0"
	I0925 12:30:59.639599    5014 logs.go:123] Gathering logs for coredns [5516aacf7bec] ...
	I0925 12:30:59.639611    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5516aacf7bec"
	I0925 12:30:59.650974    5014 logs.go:123] Gathering logs for kube-proxy [caa3fd1f1297] ...
	I0925 12:30:59.650988    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa3fd1f1297"
	I0925 12:30:59.663127    5014 logs.go:123] Gathering logs for storage-provisioner [d64985fba232] ...
	I0925 12:30:59.663137    5014 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64985fba232"
	I0925 12:30:59.674605    5014 logs.go:123] Gathering logs for container status ...
	I0925 12:30:59.674615    5014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0925 12:31:02.188002    5014 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0925 12:31:07.192320    5014 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0925 12:31:07.203943    5014 out.go:201] 
	W0925 12:31:07.206795    5014 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0925 12:31:07.206802    5014 out.go:270] * 
	W0925 12:31:07.207227    5014 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 12:31:07.223812    5014 out.go:201] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-814000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (575.58s)
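
The log above shows minikube's start-up wait loop: roughly every 2.5 seconds it probes the apiserver's healthz endpoint with a 5-second client timeout, gathers component logs between attempts, and gives up once the overall 6-minute node deadline expires, exiting with GUEST_START. The following is a minimal Go sketch of that polling pattern, not minikube's actual implementation: the URL, per-request timeout, and 6m deadline come from the log, while the helper name waitForHealthz, the retry interval, and the TLS handling are assumptions for illustration.

// Sketch of the healthz polling pattern visible in the log above.
// Illustrative only; helper name and retry interval are assumed.
package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap between "Checking" and "stopped" lines
		Transport: &http.Transport{
			// Assumption: skip verification, since the bootstrap apiserver
			// serves a self-signed certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil // healthz reported healthy
			}
		}
		time.Sleep(2500 * time.Millisecond) // log shows ~2.5s between attempts
	}
	return errors.New("apiserver healthz never reported healthy")
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println("X", err)
	}
}

In the run above every probe timed out without an answer, so the loop exhausted the full deadline and the upgrade test failed after 575s.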

TestPause/serial/Start (9.93s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-002000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-002000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.863682583s)

-- stdout --
	* [pause-002000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-002000" primary control-plane node in "pause-002000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-002000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-002000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-002000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-002000 -n pause-002000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-002000 -n pause-002000: exit status 7 (67.598042ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-002000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.93s)
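
Every remaining failure in this section shares one root cause visible in the stdout above: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the daemon's unix socket at /var/run/socket_vmnet. A quick sanity check on the CI host, assuming socket_vmnet was installed as a launchd-managed service (the service setup is an assumption, not taken from this run), might be:

	# Does the socket file exist, and is the daemon loaded?
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i socket_vmnet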

TestNoKubernetes/serial/StartWithK8s (9.95s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-078000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-078000 --driver=qemu2 : exit status 80 (9.882053291s)
-- stdout --
	* [NoKubernetes-078000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-078000" primary control-plane node in "NoKubernetes-078000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-078000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-078000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-078000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-078000 -n NoKubernetes-078000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-078000 -n NoKubernetes-078000: exit status 7 (65.478416ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-078000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.95s)

TestNoKubernetes/serial/StartWithStopK8s (5.26s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-078000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-078000 --no-kubernetes --driver=qemu2 : exit status 80 (5.226616458s)
-- stdout --
	* [NoKubernetes-078000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-078000
	* Restarting existing qemu2 VM for "NoKubernetes-078000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-078000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-078000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-078000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-078000 -n NoKubernetes-078000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-078000 -n NoKubernetes-078000: exit status 7 (30.01925ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-078000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.26s)

TestNoKubernetes/serial/Start (5.31s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-078000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-078000 --no-kubernetes --driver=qemu2 : exit status 80 (5.247548917s)
-- stdout --
	* [NoKubernetes-078000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-078000
	* Restarting existing qemu2 VM for "NoKubernetes-078000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-078000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-078000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-078000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-078000 -n NoKubernetes-078000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-078000 -n NoKubernetes-078000: exit status 7 (65.144916ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-078000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.31s)

TestNoKubernetes/serial/StartNoArgs (5.31s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-078000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-078000 --driver=qemu2 : exit status 80 (5.245879542s)
-- stdout --
	* [NoKubernetes-078000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-078000
	* Restarting existing qemu2 VM for "NoKubernetes-078000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-078000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-078000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-078000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-078000 -n NoKubernetes-078000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-078000 -n NoKubernetes-078000: exit status 7 (59.867292ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-078000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.31s)
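
Note that all four NoKubernetes subtests reuse the single profile NoKubernetes-078000, so after the initial StartWithK8s failure the later subtests only restart the same broken VM and fail in about 5s each. The cleanup suggested by the log itself would reset the profile between attempts:

	out/minikube-darwin-arm64 delete -p NoKubernetes-078000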

TestNetworkPlugins/group/auto/Start (9.84s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.841639583s)
-- stdout --
	* [auto-811000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-811000" primary control-plane node in "auto-811000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-811000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0925 12:29:29.515176    5202 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:29:29.515303    5202 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:29:29.515306    5202 out.go:358] Setting ErrFile to fd 2...
	I0925 12:29:29.515309    5202 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:29:29.515454    5202 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:29:29.516544    5202 out.go:352] Setting JSON to false
	I0925 12:29:29.533251    5202 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5340,"bootTime":1727287229,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 12:29:29.533316    5202 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 12:29:29.540556    5202 out.go:177] * [auto-811000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 12:29:29.548358    5202 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 12:29:29.548429    5202 notify.go:220] Checking for updates...
	I0925 12:29:29.554345    5202 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:29:29.557357    5202 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 12:29:29.560351    5202 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 12:29:29.563381    5202 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 12:29:29.566379    5202 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 12:29:29.569555    5202 config.go:182] Loaded profile config "multinode-761000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:29:29.569623    5202 config.go:182] Loaded profile config "stopped-upgrade-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0925 12:29:29.569677    5202 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 12:29:29.574273    5202 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 12:29:29.581274    5202 start.go:297] selected driver: qemu2
	I0925 12:29:29.581281    5202 start.go:901] validating driver "qemu2" against <nil>
	I0925 12:29:29.581287    5202 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 12:29:29.583464    5202 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0925 12:29:29.586267    5202 out.go:177] * Automatically selected the socket_vmnet network
	I0925 12:29:29.589492    5202 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 12:29:29.589512    5202 cni.go:84] Creating CNI manager for ""
	I0925 12:29:29.589536    5202 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 12:29:29.589543    5202 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 12:29:29.589577    5202 start.go:340] cluster config:
	{Name:auto-811000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:29:29.593268    5202 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:29:29.600330    5202 out.go:177] * Starting "auto-811000" primary control-plane node in "auto-811000" cluster
	I0925 12:29:29.604195    5202 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 12:29:29.604211    5202 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0925 12:29:29.604220    5202 cache.go:56] Caching tarball of preloaded images
	I0925 12:29:29.604297    5202 preload.go:172] Found /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 12:29:29.604303    5202 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0925 12:29:29.604364    5202 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/auto-811000/config.json ...
	I0925 12:29:29.604376    5202 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/auto-811000/config.json: {Name:mk75881e029249e4c0820345f2f24c2b2cab397f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:29:29.604970    5202 start.go:360] acquireMachinesLock for auto-811000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:29:29.605001    5202 start.go:364] duration metric: took 26.375µs to acquireMachinesLock for "auto-811000"
	I0925 12:29:29.605013    5202 start.go:93] Provisioning new machine with config: &{Name:auto-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:29:29.605047    5202 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:29:29.613194    5202 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 12:29:29.629916    5202 start.go:159] libmachine.API.Create for "auto-811000" (driver="qemu2")
	I0925 12:29:29.629949    5202 client.go:168] LocalClient.Create starting
	I0925 12:29:29.630012    5202 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:29:29.630047    5202 main.go:141] libmachine: Decoding PEM data...
	I0925 12:29:29.630057    5202 main.go:141] libmachine: Parsing certificate...
	I0925 12:29:29.630090    5202 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:29:29.630113    5202 main.go:141] libmachine: Decoding PEM data...
	I0925 12:29:29.630121    5202 main.go:141] libmachine: Parsing certificate...
	I0925 12:29:29.630517    5202 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:29:29.793330    5202 main.go:141] libmachine: Creating SSH key...
	I0925 12:29:29.869847    5202 main.go:141] libmachine: Creating Disk image...
	I0925 12:29:29.869853    5202 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:29:29.870046    5202 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/auto-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/auto-811000/disk.qcow2
	I0925 12:29:29.879125    5202 main.go:141] libmachine: STDOUT: 
	I0925 12:29:29.879148    5202 main.go:141] libmachine: STDERR: 
	I0925 12:29:29.879207    5202 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/auto-811000/disk.qcow2 +20000M
	I0925 12:29:29.887015    5202 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:29:29.887028    5202 main.go:141] libmachine: STDERR: 
	I0925 12:29:29.887040    5202 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/auto-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/auto-811000/disk.qcow2
	I0925 12:29:29.887045    5202 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:29:29.887056    5202 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:29:29.887079    5202 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/auto-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/auto-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/auto-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:dc:ea:52:f8:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/auto-811000/disk.qcow2
	I0925 12:29:29.888623    5202 main.go:141] libmachine: STDOUT: 
	I0925 12:29:29.888638    5202 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:29:29.888658    5202 client.go:171] duration metric: took 258.70775ms to LocalClient.Create
	I0925 12:29:31.890834    5202 start.go:128] duration metric: took 2.285798792s to createHost
	I0925 12:29:31.890953    5202 start.go:83] releasing machines lock for "auto-811000", held for 2.285983s
	W0925 12:29:31.891022    5202 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:29:31.902176    5202 out.go:177] * Deleting "auto-811000" in qemu2 ...
	W0925 12:29:31.931424    5202 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:29:31.931454    5202 start.go:729] Will try again in 5 seconds ...
	I0925 12:29:36.933545    5202 start.go:360] acquireMachinesLock for auto-811000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:29:36.934162    5202 start.go:364] duration metric: took 491.583µs to acquireMachinesLock for "auto-811000"
	I0925 12:29:36.934281    5202 start.go:93] Provisioning new machine with config: &{Name:auto-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:29:36.934550    5202 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:29:36.940246    5202 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 12:29:36.990206    5202 start.go:159] libmachine.API.Create for "auto-811000" (driver="qemu2")
	I0925 12:29:36.990267    5202 client.go:168] LocalClient.Create starting
	I0925 12:29:36.990406    5202 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:29:36.990483    5202 main.go:141] libmachine: Decoding PEM data...
	I0925 12:29:36.990502    5202 main.go:141] libmachine: Parsing certificate...
	I0925 12:29:36.990568    5202 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:29:36.990616    5202 main.go:141] libmachine: Decoding PEM data...
	I0925 12:29:36.990629    5202 main.go:141] libmachine: Parsing certificate...
	I0925 12:29:36.991214    5202 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:29:37.163930    5202 main.go:141] libmachine: Creating SSH key...
	I0925 12:29:37.257887    5202 main.go:141] libmachine: Creating Disk image...
	I0925 12:29:37.257893    5202 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:29:37.258106    5202 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/auto-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/auto-811000/disk.qcow2
	I0925 12:29:37.267542    5202 main.go:141] libmachine: STDOUT: 
	I0925 12:29:37.267561    5202 main.go:141] libmachine: STDERR: 
	I0925 12:29:37.267629    5202 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/auto-811000/disk.qcow2 +20000M
	I0925 12:29:37.275866    5202 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:29:37.275888    5202 main.go:141] libmachine: STDERR: 
	I0925 12:29:37.275910    5202 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/auto-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/auto-811000/disk.qcow2
	I0925 12:29:37.275916    5202 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:29:37.275922    5202 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:29:37.275948    5202 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/auto-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/auto-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/auto-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:ab:f8:c1:53:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/auto-811000/disk.qcow2
	I0925 12:29:37.277669    5202 main.go:141] libmachine: STDOUT: 
	I0925 12:29:37.277684    5202 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:29:37.277698    5202 client.go:171] duration metric: took 287.432208ms to LocalClient.Create
	I0925 12:29:39.279860    5202 start.go:128] duration metric: took 2.345308875s to createHost
	I0925 12:29:39.279960    5202 start.go:83] releasing machines lock for "auto-811000", held for 2.345774791s
	W0925 12:29:39.280292    5202 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:29:39.296089    5202 out.go:201] 
	W0925 12:29:39.299225    5202 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:29:39.299264    5202 out.go:270] * 
	* 
	W0925 12:29:39.301794    5202 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 12:29:39.315001    5202 out.go:201] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.84s)
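
Because this test ran with --alsologtostderr, the trace above shows the exact launch path: libmachine execs /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 ..., so the client is expected to open the daemon's socket and hand QEMU a connected file descriptor (-netdev socket,id=net0,fd=3). The refused connection can be reproduced without QEMU at all; nc does not speak the daemon's protocol (an assumption), but a plain unix-socket connect is enough to show whether anything is listening:

	# Expect "Connection refused" here too if the socket_vmnet daemon is down
	nc -U /var/run/socket_vmnet </dev/null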

TestNetworkPlugins/group/flannel/Start (9.79s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.786647042s)
-- stdout --
	* [flannel-811000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-811000" primary control-plane node in "flannel-811000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-811000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0925 12:29:41.515637    5311 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:29:41.515782    5311 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:29:41.515785    5311 out.go:358] Setting ErrFile to fd 2...
	I0925 12:29:41.515788    5311 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:29:41.515908    5311 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:29:41.516935    5311 out.go:352] Setting JSON to false
	I0925 12:29:41.533400    5311 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5352,"bootTime":1727287229,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 12:29:41.533469    5311 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 12:29:41.541141    5311 out.go:177] * [flannel-811000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 12:29:41.549856    5311 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 12:29:41.549903    5311 notify.go:220] Checking for updates...
	I0925 12:29:41.554810    5311 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:29:41.557865    5311 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 12:29:41.560867    5311 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 12:29:41.563905    5311 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 12:29:41.566790    5311 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 12:29:41.570251    5311 config.go:182] Loaded profile config "multinode-761000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:29:41.570317    5311 config.go:182] Loaded profile config "stopped-upgrade-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0925 12:29:41.570373    5311 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 12:29:41.574781    5311 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 12:29:41.581850    5311 start.go:297] selected driver: qemu2
	I0925 12:29:41.581858    5311 start.go:901] validating driver "qemu2" against <nil>
	I0925 12:29:41.581864    5311 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 12:29:41.584053    5311 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0925 12:29:41.586825    5311 out.go:177] * Automatically selected the socket_vmnet network
	I0925 12:29:41.589979    5311 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 12:29:41.590004    5311 cni.go:84] Creating CNI manager for "flannel"
	I0925 12:29:41.590009    5311 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0925 12:29:41.590057    5311 start.go:340] cluster config:
	{Name:flannel-811000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:29:41.593850    5311 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:29:41.599813    5311 out.go:177] * Starting "flannel-811000" primary control-plane node in "flannel-811000" cluster
	I0925 12:29:41.603813    5311 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 12:29:41.603827    5311 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0925 12:29:41.603835    5311 cache.go:56] Caching tarball of preloaded images
	I0925 12:29:41.603893    5311 preload.go:172] Found /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 12:29:41.603899    5311 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0925 12:29:41.603953    5311 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/flannel-811000/config.json ...
	I0925 12:29:41.603964    5311 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/flannel-811000/config.json: {Name:mk34b9f235accca17dae13257dfa9ef008f0a2f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:29:41.604192    5311 start.go:360] acquireMachinesLock for flannel-811000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:29:41.604226    5311 start.go:364] duration metric: took 27.709µs to acquireMachinesLock for "flannel-811000"
	I0925 12:29:41.604239    5311 start.go:93] Provisioning new machine with config: &{Name:flannel-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:29:41.604269    5311 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:29:41.612800    5311 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 12:29:41.628387    5311 start.go:159] libmachine.API.Create for "flannel-811000" (driver="qemu2")
	I0925 12:29:41.628413    5311 client.go:168] LocalClient.Create starting
	I0925 12:29:41.628475    5311 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:29:41.628507    5311 main.go:141] libmachine: Decoding PEM data...
	I0925 12:29:41.628517    5311 main.go:141] libmachine: Parsing certificate...
	I0925 12:29:41.628555    5311 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:29:41.628578    5311 main.go:141] libmachine: Decoding PEM data...
	I0925 12:29:41.628590    5311 main.go:141] libmachine: Parsing certificate...
	I0925 12:29:41.628912    5311 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:29:41.794114    5311 main.go:141] libmachine: Creating SSH key...
	I0925 12:29:41.837855    5311 main.go:141] libmachine: Creating Disk image...
	I0925 12:29:41.837865    5311 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:29:41.838045    5311 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/flannel-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/flannel-811000/disk.qcow2
	I0925 12:29:41.847652    5311 main.go:141] libmachine: STDOUT: 
	I0925 12:29:41.847674    5311 main.go:141] libmachine: STDERR: 
	I0925 12:29:41.847733    5311 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/flannel-811000/disk.qcow2 +20000M
	I0925 12:29:41.855870    5311 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:29:41.855888    5311 main.go:141] libmachine: STDERR: 
	I0925 12:29:41.855915    5311 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/flannel-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/flannel-811000/disk.qcow2
	I0925 12:29:41.855920    5311 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:29:41.855933    5311 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:29:41.855963    5311 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/flannel-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/flannel-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/flannel-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:f4:fd:66:48:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/flannel-811000/disk.qcow2
	I0925 12:29:41.857540    5311 main.go:141] libmachine: STDOUT: 
	I0925 12:29:41.857554    5311 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:29:41.857574    5311 client.go:171] duration metric: took 229.159208ms to LocalClient.Create
	I0925 12:29:43.859740    5311 start.go:128] duration metric: took 2.255485875s to createHost
	I0925 12:29:43.859854    5311 start.go:83] releasing machines lock for "flannel-811000", held for 2.255660542s
	W0925 12:29:43.859929    5311 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:29:43.871860    5311 out.go:177] * Deleting "flannel-811000" in qemu2 ...
	W0925 12:29:43.903749    5311 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:29:43.903791    5311 start.go:729] Will try again in 5 seconds ...
	I0925 12:29:48.905858    5311 start.go:360] acquireMachinesLock for flannel-811000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:29:48.906104    5311 start.go:364] duration metric: took 206.375µs to acquireMachinesLock for "flannel-811000"
	I0925 12:29:48.906130    5311 start.go:93] Provisioning new machine with config: &{Name:flannel-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:29:48.906210    5311 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:29:48.917499    5311 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 12:29:48.943983    5311 start.go:159] libmachine.API.Create for "flannel-811000" (driver="qemu2")
	I0925 12:29:48.944016    5311 client.go:168] LocalClient.Create starting
	I0925 12:29:48.944109    5311 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:29:48.944157    5311 main.go:141] libmachine: Decoding PEM data...
	I0925 12:29:48.944169    5311 main.go:141] libmachine: Parsing certificate...
	I0925 12:29:48.944215    5311 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:29:48.944245    5311 main.go:141] libmachine: Decoding PEM data...
	I0925 12:29:48.944255    5311 main.go:141] libmachine: Parsing certificate...
	I0925 12:29:48.944607    5311 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:29:49.109405    5311 main.go:141] libmachine: Creating SSH key...
	I0925 12:29:49.208214    5311 main.go:141] libmachine: Creating Disk image...
	I0925 12:29:49.208221    5311 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:29:49.208419    5311 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/flannel-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/flannel-811000/disk.qcow2
	I0925 12:29:49.217575    5311 main.go:141] libmachine: STDOUT: 
	I0925 12:29:49.217598    5311 main.go:141] libmachine: STDERR: 
	I0925 12:29:49.217660    5311 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/flannel-811000/disk.qcow2 +20000M
	I0925 12:29:49.225604    5311 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:29:49.225632    5311 main.go:141] libmachine: STDERR: 
	I0925 12:29:49.225646    5311 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/flannel-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/flannel-811000/disk.qcow2
	I0925 12:29:49.225661    5311 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:29:49.225673    5311 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:29:49.225703    5311 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/flannel-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/flannel-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/flannel-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:f3:8c:03:f5:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/flannel-811000/disk.qcow2
	I0925 12:29:49.227378    5311 main.go:141] libmachine: STDOUT: 
	I0925 12:29:49.227394    5311 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:29:49.227409    5311 client.go:171] duration metric: took 283.393834ms to LocalClient.Create
	I0925 12:29:51.229575    5311 start.go:128] duration metric: took 2.3233775s to createHost
	I0925 12:29:51.229691    5311 start.go:83] releasing machines lock for "flannel-811000", held for 2.323615666s
	W0925 12:29:51.230044    5311 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:29:51.239725    5311 out.go:201] 
	W0925 12:29:51.249653    5311 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:29:51.249683    5311 out.go:270] * 
	* 
	W0925 12:29:51.252305    5311 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 12:29:51.260646    5311 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.79s)
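Every failure in this group has the same proximate cause: socket_vmnet_client exits with 'Failed to connect to "/var/run/socket_vmnet": Connection refused' before QEMU is ever launched, which means nothing is listening on the socket_vmnet unix socket on the build host. A minimal standalone probe that reproduces the same check (a hypothetical diagnostic sketch in Go; it is not part of minikube or this test suite):

	// probe_socket_vmnet.go - hypothetical diagnostic; dials the same unix
	// socket that socket_vmnet_client is handed (SocketVMnetPath in the
	// cluster config above). "connection refused" here means the
	// socket_vmnet daemon is not running on the host.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "FAIL: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("OK: %s is accepting connections\n", sock)
	}

On a healthy agent this prints the OK line; on this run it would fail exactly as the tests do, pointing at the host's socket_vmnet daemon setup rather than at minikube itself.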

TestNetworkPlugins/group/kindnet/Start (9.8s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.800326s)

-- stdout --
	* [kindnet-811000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-811000" primary control-plane node in "kindnet-811000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-811000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 12:29:53.607404    5428 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:29:53.607536    5428 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:29:53.607540    5428 out.go:358] Setting ErrFile to fd 2...
	I0925 12:29:53.607542    5428 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:29:53.607673    5428 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:29:53.608718    5428 out.go:352] Setting JSON to false
	I0925 12:29:53.624863    5428 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5364,"bootTime":1727287229,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 12:29:53.624948    5428 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 12:29:53.630327    5428 out.go:177] * [kindnet-811000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 12:29:53.639041    5428 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 12:29:53.639088    5428 notify.go:220] Checking for updates...
	I0925 12:29:53.645048    5428 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:29:53.648042    5428 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 12:29:53.651039    5428 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 12:29:53.652385    5428 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 12:29:53.655081    5428 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 12:29:53.658328    5428 config.go:182] Loaded profile config "multinode-761000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:29:53.658394    5428 config.go:182] Loaded profile config "stopped-upgrade-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0925 12:29:53.658441    5428 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 12:29:53.662942    5428 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 12:29:53.670040    5428 start.go:297] selected driver: qemu2
	I0925 12:29:53.670045    5428 start.go:901] validating driver "qemu2" against <nil>
	I0925 12:29:53.670050    5428 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 12:29:53.672180    5428 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0925 12:29:53.675003    5428 out.go:177] * Automatically selected the socket_vmnet network
	I0925 12:29:53.678144    5428 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 12:29:53.678170    5428 cni.go:84] Creating CNI manager for "kindnet"
	I0925 12:29:53.678183    5428 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0925 12:29:53.678211    5428 start.go:340] cluster config:
	{Name:kindnet-811000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:29:53.681827    5428 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:29:53.688062    5428 out.go:177] * Starting "kindnet-811000" primary control-plane node in "kindnet-811000" cluster
	I0925 12:29:53.692014    5428 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 12:29:53.692036    5428 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0925 12:29:53.692053    5428 cache.go:56] Caching tarball of preloaded images
	I0925 12:29:53.692125    5428 preload.go:172] Found /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 12:29:53.692130    5428 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0925 12:29:53.692191    5428 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/kindnet-811000/config.json ...
	I0925 12:29:53.692201    5428 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/kindnet-811000/config.json: {Name:mkc139380616ed0f7f8fb69238b3220b2eccc5f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:29:53.692475    5428 start.go:360] acquireMachinesLock for kindnet-811000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:29:53.692509    5428 start.go:364] duration metric: took 28.083µs to acquireMachinesLock for "kindnet-811000"
	I0925 12:29:53.692520    5428 start.go:93] Provisioning new machine with config: &{Name:kindnet-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:29:53.692557    5428 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:29:53.700047    5428 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 12:29:53.717538    5428 start.go:159] libmachine.API.Create for "kindnet-811000" (driver="qemu2")
	I0925 12:29:53.717565    5428 client.go:168] LocalClient.Create starting
	I0925 12:29:53.717635    5428 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:29:53.717679    5428 main.go:141] libmachine: Decoding PEM data...
	I0925 12:29:53.717688    5428 main.go:141] libmachine: Parsing certificate...
	I0925 12:29:53.717719    5428 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:29:53.717750    5428 main.go:141] libmachine: Decoding PEM data...
	I0925 12:29:53.717759    5428 main.go:141] libmachine: Parsing certificate...
	I0925 12:29:53.718124    5428 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:29:53.880411    5428 main.go:141] libmachine: Creating SSH key...
	I0925 12:29:53.969672    5428 main.go:141] libmachine: Creating Disk image...
	I0925 12:29:53.969679    5428 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:29:53.969865    5428 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kindnet-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kindnet-811000/disk.qcow2
	I0925 12:29:53.979063    5428 main.go:141] libmachine: STDOUT: 
	I0925 12:29:53.979079    5428 main.go:141] libmachine: STDERR: 
	I0925 12:29:53.979127    5428 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kindnet-811000/disk.qcow2 +20000M
	I0925 12:29:53.986870    5428 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:29:53.986886    5428 main.go:141] libmachine: STDERR: 
	I0925 12:29:53.986903    5428 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kindnet-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kindnet-811000/disk.qcow2
	I0925 12:29:53.986908    5428 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:29:53.986918    5428 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:29:53.986955    5428 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kindnet-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kindnet-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kindnet-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:63:16:d3:53:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kindnet-811000/disk.qcow2
	I0925 12:29:53.988527    5428 main.go:141] libmachine: STDOUT: 
	I0925 12:29:53.988540    5428 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:29:53.988560    5428 client.go:171] duration metric: took 270.993583ms to LocalClient.Create
	I0925 12:29:55.990797    5428 start.go:128] duration metric: took 2.298249542s to createHost
	I0925 12:29:55.990885    5428 start.go:83] releasing machines lock for "kindnet-811000", held for 2.29840925s
	W0925 12:29:55.990943    5428 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:29:56.002034    5428 out.go:177] * Deleting "kindnet-811000" in qemu2 ...
	W0925 12:29:56.044046    5428 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:29:56.044076    5428 start.go:729] Will try again in 5 seconds ...
	I0925 12:30:01.046183    5428 start.go:360] acquireMachinesLock for kindnet-811000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:30:01.046479    5428 start.go:364] duration metric: took 213.5µs to acquireMachinesLock for "kindnet-811000"
	I0925 12:30:01.046553    5428 start.go:93] Provisioning new machine with config: &{Name:kindnet-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:30:01.046756    5428 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:30:01.056056    5428 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 12:30:01.096831    5428 start.go:159] libmachine.API.Create for "kindnet-811000" (driver="qemu2")
	I0925 12:30:01.096885    5428 client.go:168] LocalClient.Create starting
	I0925 12:30:01.097015    5428 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:30:01.097082    5428 main.go:141] libmachine: Decoding PEM data...
	I0925 12:30:01.097102    5428 main.go:141] libmachine: Parsing certificate...
	I0925 12:30:01.097167    5428 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:30:01.097211    5428 main.go:141] libmachine: Decoding PEM data...
	I0925 12:30:01.097224    5428 main.go:141] libmachine: Parsing certificate...
	I0925 12:30:01.097875    5428 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:30:01.268074    5428 main.go:141] libmachine: Creating SSH key...
	I0925 12:30:01.319134    5428 main.go:141] libmachine: Creating Disk image...
	I0925 12:30:01.319141    5428 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:30:01.319315    5428 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kindnet-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kindnet-811000/disk.qcow2
	I0925 12:30:01.328174    5428 main.go:141] libmachine: STDOUT: 
	I0925 12:30:01.328191    5428 main.go:141] libmachine: STDERR: 
	I0925 12:30:01.328257    5428 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kindnet-811000/disk.qcow2 +20000M
	I0925 12:30:01.336523    5428 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:30:01.336541    5428 main.go:141] libmachine: STDERR: 
	I0925 12:30:01.336564    5428 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kindnet-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kindnet-811000/disk.qcow2
	I0925 12:30:01.336569    5428 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:30:01.336578    5428 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:30:01.336604    5428 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kindnet-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kindnet-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kindnet-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:ee:5b:fb:17:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kindnet-811000/disk.qcow2
	I0925 12:30:01.338256    5428 main.go:141] libmachine: STDOUT: 
	I0925 12:30:01.338271    5428 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:30:01.338283    5428 client.go:171] duration metric: took 241.396875ms to LocalClient.Create
	I0925 12:30:03.340412    5428 start.go:128] duration metric: took 2.293671042s to createHost
	I0925 12:30:03.340495    5428 start.go:83] releasing machines lock for "kindnet-811000", held for 2.294042958s
	W0925 12:30:03.340748    5428 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:30:03.354117    5428 out.go:201] 
	W0925 12:30:03.358350    5428 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:30:03.358384    5428 out.go:270] * 
	* 
	W0925 12:30:03.359652    5428 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 12:30:03.369243    5428 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.80s)
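The logs above also show the driver's recovery path: the first LocalClient.Create failure triggers "Deleting ... in qemu2", a fixed 5-second wait ("Will try again in 5 seconds ..."), one retry, and only then the GUEST_PROVISION exit (status 80). A sketch of that two-attempt shape (hypothetical Go; the names below are illustrative and do not match minikube's internal API):

	// Two attempts with a fixed 5s pause, mirroring the start.go flow in the
	// logs: StartHost fails -> delete profile -> wait 5s -> retry -> exit 80.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// createHost stands in for libmachine.API.Create; in this environment it
	// always fails because the socket_vmnet socket refuses connections.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds"
			if err = createHost(); err != nil {
				fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
				os.Exit(80)
			}
		}
	}

Because both attempts hit the same refused socket, each test in this group fails after roughly 10 seconds (two ~2.3s createHost cycles plus the 5s backoff), which matches the durations recorded above.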

TestNetworkPlugins/group/enable-default-cni/Start (9.92s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.921969125s)

-- stdout --
	* [enable-default-cni-811000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-811000" primary control-plane node in "enable-default-cni-811000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-811000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 12:30:05.711630    5843 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:30:05.711752    5843 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:30:05.711755    5843 out.go:358] Setting ErrFile to fd 2...
	I0925 12:30:05.711757    5843 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:30:05.711897    5843 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:30:05.712959    5843 out.go:352] Setting JSON to false
	I0925 12:30:05.729412    5843 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5376,"bootTime":1727287229,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 12:30:05.729477    5843 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 12:30:05.734912    5843 out.go:177] * [enable-default-cni-811000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 12:30:05.742748    5843 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 12:30:05.742799    5843 notify.go:220] Checking for updates...
	I0925 12:30:05.748779    5843 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:30:05.751737    5843 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 12:30:05.754726    5843 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 12:30:05.757682    5843 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 12:30:05.760736    5843 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 12:30:05.764057    5843 config.go:182] Loaded profile config "multinode-761000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:30:05.764120    5843 config.go:182] Loaded profile config "stopped-upgrade-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0925 12:30:05.764164    5843 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 12:30:05.768663    5843 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 12:30:05.775722    5843 start.go:297] selected driver: qemu2
	I0925 12:30:05.775729    5843 start.go:901] validating driver "qemu2" against <nil>
	I0925 12:30:05.775735    5843 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 12:30:05.777842    5843 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0925 12:30:05.780708    5843 out.go:177] * Automatically selected the socket_vmnet network
	E0925 12:30:05.783781    5843 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0925 12:30:05.783795    5843 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 12:30:05.783812    5843 cni.go:84] Creating CNI manager for "bridge"
	I0925 12:30:05.783820    5843 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 12:30:05.783848    5843 start.go:340] cluster config:
	{Name:enable-default-cni-811000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:30:05.787395    5843 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:30:05.790760    5843 out.go:177] * Starting "enable-default-cni-811000" primary control-plane node in "enable-default-cni-811000" cluster
	I0925 12:30:05.798728    5843 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 12:30:05.798745    5843 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0925 12:30:05.798750    5843 cache.go:56] Caching tarball of preloaded images
	I0925 12:30:05.798811    5843 preload.go:172] Found /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 12:30:05.798816    5843 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0925 12:30:05.798865    5843 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/enable-default-cni-811000/config.json ...
	I0925 12:30:05.798876    5843 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/enable-default-cni-811000/config.json: {Name:mkb26605fc8e75d5ea351bd7cd31fc9ffdf14c59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:30:05.799191    5843 start.go:360] acquireMachinesLock for enable-default-cni-811000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:30:05.799227    5843 start.go:364] duration metric: took 28.875µs to acquireMachinesLock for "enable-default-cni-811000"
	I0925 12:30:05.799240    5843 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:30:05.799273    5843 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:30:05.807709    5843 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 12:30:05.823124    5843 start.go:159] libmachine.API.Create for "enable-default-cni-811000" (driver="qemu2")
	I0925 12:30:05.823151    5843 client.go:168] LocalClient.Create starting
	I0925 12:30:05.823209    5843 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:30:05.823242    5843 main.go:141] libmachine: Decoding PEM data...
	I0925 12:30:05.823255    5843 main.go:141] libmachine: Parsing certificate...
	I0925 12:30:05.823299    5843 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:30:05.823325    5843 main.go:141] libmachine: Decoding PEM data...
	I0925 12:30:05.823332    5843 main.go:141] libmachine: Parsing certificate...
	I0925 12:30:05.823725    5843 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:30:05.985102    5843 main.go:141] libmachine: Creating SSH key...
	I0925 12:30:06.143883    5843 main.go:141] libmachine: Creating Disk image...
	I0925 12:30:06.143893    5843 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:30:06.144113    5843 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/enable-default-cni-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/enable-default-cni-811000/disk.qcow2
	I0925 12:30:06.153996    5843 main.go:141] libmachine: STDOUT: 
	I0925 12:30:06.154015    5843 main.go:141] libmachine: STDERR: 
	I0925 12:30:06.154069    5843 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/enable-default-cni-811000/disk.qcow2 +20000M
	I0925 12:30:06.162299    5843 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:30:06.162316    5843 main.go:141] libmachine: STDERR: 
	I0925 12:30:06.162339    5843 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/enable-default-cni-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/enable-default-cni-811000/disk.qcow2
	I0925 12:30:06.162345    5843 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:30:06.162356    5843 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:30:06.162383    5843 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/enable-default-cni-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/enable-default-cni-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/enable-default-cni-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:5f:04:7e:ba:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/enable-default-cni-811000/disk.qcow2
	I0925 12:30:06.164146    5843 main.go:141] libmachine: STDOUT: 
	I0925 12:30:06.164174    5843 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:30:06.164198    5843 client.go:171] duration metric: took 341.046959ms to LocalClient.Create
	I0925 12:30:08.166374    5843 start.go:128] duration metric: took 2.367112208s to createHost
	I0925 12:30:08.166466    5843 start.go:83] releasing machines lock for "enable-default-cni-811000", held for 2.367272417s
	W0925 12:30:08.166544    5843 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:30:08.187857    5843 out.go:177] * Deleting "enable-default-cni-811000" in qemu2 ...
	W0925 12:30:08.221948    5843 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:30:08.221982    5843 start.go:729] Will try again in 5 seconds ...
	I0925 12:30:13.222318    5843 start.go:360] acquireMachinesLock for enable-default-cni-811000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:30:13.222951    5843 start.go:364] duration metric: took 532.333µs to acquireMachinesLock for "enable-default-cni-811000"
	I0925 12:30:13.223096    5843 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:30:13.223380    5843 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:30:13.230985    5843 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 12:30:13.280978    5843 start.go:159] libmachine.API.Create for "enable-default-cni-811000" (driver="qemu2")
	I0925 12:30:13.281160    5843 client.go:168] LocalClient.Create starting
	I0925 12:30:13.281325    5843 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:30:13.281393    5843 main.go:141] libmachine: Decoding PEM data...
	I0925 12:30:13.281413    5843 main.go:141] libmachine: Parsing certificate...
	I0925 12:30:13.281477    5843 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:30:13.281523    5843 main.go:141] libmachine: Decoding PEM data...
	I0925 12:30:13.281536    5843 main.go:141] libmachine: Parsing certificate...
	I0925 12:30:13.282061    5843 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:30:13.453020    5843 main.go:141] libmachine: Creating SSH key...
	I0925 12:30:13.537239    5843 main.go:141] libmachine: Creating Disk image...
	I0925 12:30:13.537246    5843 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:30:13.537437    5843 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/enable-default-cni-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/enable-default-cni-811000/disk.qcow2
	I0925 12:30:13.546561    5843 main.go:141] libmachine: STDOUT: 
	I0925 12:30:13.546580    5843 main.go:141] libmachine: STDERR: 
	I0925 12:30:13.546645    5843 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/enable-default-cni-811000/disk.qcow2 +20000M
	I0925 12:30:13.554437    5843 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:30:13.554461    5843 main.go:141] libmachine: STDERR: 
	I0925 12:30:13.554475    5843 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/enable-default-cni-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/enable-default-cni-811000/disk.qcow2
	I0925 12:30:13.554479    5843 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:30:13.554488    5843 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:30:13.554518    5843 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/enable-default-cni-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/enable-default-cni-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/enable-default-cni-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:27:34:4b:be:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/enable-default-cni-811000/disk.qcow2
	I0925 12:30:13.556122    5843 main.go:141] libmachine: STDOUT: 
	I0925 12:30:13.556149    5843 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:30:13.556161    5843 client.go:171] duration metric: took 275.000917ms to LocalClient.Create
	I0925 12:30:15.558335    5843 start.go:128] duration metric: took 2.334957583s to createHost
	I0925 12:30:15.558399    5843 start.go:83] releasing machines lock for "enable-default-cni-811000", held for 2.33546675s
	W0925 12:30:15.558814    5843 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:30:15.569549    5843 out.go:201] 
	W0925 12:30:15.578685    5843 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:30:15.578716    5843 out.go:270] * 
	* 
	W0925 12:30:15.581517    5843 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 12:30:15.591538    5843 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.92s)
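Both start attempts above fail at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the QEMU VM is never launched and minikube exits with GUEST_PROVISION. A minimal triage sketch on the build host, using only paths that appear in the log (the daemon binary path and its flags are assumptions based on a default socket_vmnet install, not something the log confirms):

	# Check that the socket exists and that a socket_vmnet daemon is running.
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# Reproduce the failure without minikube: socket_vmnet_client connects to the
	# socket and execs the given command, so a trivial command is enough here.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

	# Assumed daemon (re)start; needs root for the vmnet framework, and the
	# gateway value follows the socket_vmnet README default.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

If the client command prints the same Connection refused, the daemon is down or bound to a different socket path, which would account for every qemu2 failure in this group.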

TestNetworkPlugins/group/bridge/Start (9.95s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.948170417s)

-- stdout --
	* [bridge-811000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-811000" primary control-plane node in "bridge-811000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-811000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 12:30:17.808167    5952 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:30:17.808322    5952 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:30:17.808326    5952 out.go:358] Setting ErrFile to fd 2...
	I0925 12:30:17.808328    5952 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:30:17.808440    5952 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:30:17.809489    5952 out.go:352] Setting JSON to false
	I0925 12:30:17.825364    5952 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5388,"bootTime":1727287229,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 12:30:17.825433    5952 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 12:30:17.833169    5952 out.go:177] * [bridge-811000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 12:30:17.840995    5952 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 12:30:17.841051    5952 notify.go:220] Checking for updates...
	I0925 12:30:17.848049    5952 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:30:17.851123    5952 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 12:30:17.854010    5952 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 12:30:17.857048    5952 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 12:30:17.859936    5952 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 12:30:17.863380    5952 config.go:182] Loaded profile config "multinode-761000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:30:17.863449    5952 config.go:182] Loaded profile config "stopped-upgrade-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0925 12:30:17.863494    5952 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 12:30:17.867945    5952 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 12:30:17.875039    5952 start.go:297] selected driver: qemu2
	I0925 12:30:17.875046    5952 start.go:901] validating driver "qemu2" against <nil>
	I0925 12:30:17.875056    5952 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 12:30:17.877449    5952 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0925 12:30:17.881039    5952 out.go:177] * Automatically selected the socket_vmnet network
	I0925 12:30:17.884107    5952 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 12:30:17.884133    5952 cni.go:84] Creating CNI manager for "bridge"
	I0925 12:30:17.884137    5952 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 12:30:17.884181    5952 start.go:340] cluster config:
	{Name:bridge-811000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:30:17.887983    5952 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:30:17.894966    5952 out.go:177] * Starting "bridge-811000" primary control-plane node in "bridge-811000" cluster
	I0925 12:30:17.898978    5952 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 12:30:17.898993    5952 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0925 12:30:17.899004    5952 cache.go:56] Caching tarball of preloaded images
	I0925 12:30:17.899077    5952 preload.go:172] Found /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 12:30:17.899083    5952 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0925 12:30:17.899145    5952 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/bridge-811000/config.json ...
	I0925 12:30:17.899157    5952 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/bridge-811000/config.json: {Name:mka2286ef77dbb6572177822b0459b83c400b4ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:30:17.899566    5952 start.go:360] acquireMachinesLock for bridge-811000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:30:17.899605    5952 start.go:364] duration metric: took 31.333µs to acquireMachinesLock for "bridge-811000"
	I0925 12:30:17.899618    5952 start.go:93] Provisioning new machine with config: &{Name:bridge-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:30:17.899646    5952 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:30:17.907947    5952 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 12:30:17.925400    5952 start.go:159] libmachine.API.Create for "bridge-811000" (driver="qemu2")
	I0925 12:30:17.925431    5952 client.go:168] LocalClient.Create starting
	I0925 12:30:17.925502    5952 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:30:17.925532    5952 main.go:141] libmachine: Decoding PEM data...
	I0925 12:30:17.925542    5952 main.go:141] libmachine: Parsing certificate...
	I0925 12:30:17.925580    5952 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:30:17.925604    5952 main.go:141] libmachine: Decoding PEM data...
	I0925 12:30:17.925612    5952 main.go:141] libmachine: Parsing certificate...
	I0925 12:30:17.926019    5952 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:30:18.087766    5952 main.go:141] libmachine: Creating SSH key...
	I0925 12:30:18.146760    5952 main.go:141] libmachine: Creating Disk image...
	I0925 12:30:18.146771    5952 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:30:18.146952    5952 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/bridge-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/bridge-811000/disk.qcow2
	I0925 12:30:18.155986    5952 main.go:141] libmachine: STDOUT: 
	I0925 12:30:18.156009    5952 main.go:141] libmachine: STDERR: 
	I0925 12:30:18.156072    5952 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/bridge-811000/disk.qcow2 +20000M
	I0925 12:30:18.164032    5952 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:30:18.164046    5952 main.go:141] libmachine: STDERR: 
	I0925 12:30:18.164061    5952 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/bridge-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/bridge-811000/disk.qcow2
	I0925 12:30:18.164066    5952 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:30:18.164077    5952 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:30:18.164111    5952 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/bridge-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/bridge-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/bridge-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:72:f3:8b:ad:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/bridge-811000/disk.qcow2
	I0925 12:30:18.165707    5952 main.go:141] libmachine: STDOUT: 
	I0925 12:30:18.165721    5952 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:30:18.165742    5952 client.go:171] duration metric: took 240.309208ms to LocalClient.Create
	I0925 12:30:20.166091    5952 start.go:128] duration metric: took 2.266473875s to createHost
	I0925 12:30:20.166112    5952 start.go:83] releasing machines lock for "bridge-811000", held for 2.2665445s
	W0925 12:30:20.166142    5952 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:30:20.170733    5952 out.go:177] * Deleting "bridge-811000" in qemu2 ...
	W0925 12:30:20.192360    5952 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:30:20.192371    5952 start.go:729] Will try again in 5 seconds ...
	I0925 12:30:25.194505    5952 start.go:360] acquireMachinesLock for bridge-811000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:30:25.195021    5952 start.go:364] duration metric: took 433.291µs to acquireMachinesLock for "bridge-811000"
	I0925 12:30:25.195099    5952 start.go:93] Provisioning new machine with config: &{Name:bridge-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:30:25.195357    5952 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:30:25.210960    5952 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 12:30:25.263238    5952 start.go:159] libmachine.API.Create for "bridge-811000" (driver="qemu2")
	I0925 12:30:25.263289    5952 client.go:168] LocalClient.Create starting
	I0925 12:30:25.263400    5952 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:30:25.263462    5952 main.go:141] libmachine: Decoding PEM data...
	I0925 12:30:25.263482    5952 main.go:141] libmachine: Parsing certificate...
	I0925 12:30:25.263555    5952 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:30:25.263599    5952 main.go:141] libmachine: Decoding PEM data...
	I0925 12:30:25.263613    5952 main.go:141] libmachine: Parsing certificate...
	I0925 12:30:25.264114    5952 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:30:25.436186    5952 main.go:141] libmachine: Creating SSH key...
	I0925 12:30:25.668240    5952 main.go:141] libmachine: Creating Disk image...
	I0925 12:30:25.668257    5952 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:30:25.668508    5952 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/bridge-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/bridge-811000/disk.qcow2
	I0925 12:30:25.678441    5952 main.go:141] libmachine: STDOUT: 
	I0925 12:30:25.678458    5952 main.go:141] libmachine: STDERR: 
	I0925 12:30:25.678533    5952 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/bridge-811000/disk.qcow2 +20000M
	I0925 12:30:25.686540    5952 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:30:25.686552    5952 main.go:141] libmachine: STDERR: 
	I0925 12:30:25.686562    5952 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/bridge-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/bridge-811000/disk.qcow2
	I0925 12:30:25.686567    5952 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:30:25.686577    5952 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:30:25.686611    5952 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/bridge-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/bridge-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/bridge-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:68:af:b9:d3:96 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/bridge-811000/disk.qcow2
	I0925 12:30:25.688276    5952 main.go:141] libmachine: STDOUT: 
	I0925 12:30:25.688291    5952 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:30:25.688303    5952 client.go:171] duration metric: took 425.012708ms to LocalClient.Create
	I0925 12:30:27.690338    5952 start.go:128] duration metric: took 2.495013166s to createHost
	I0925 12:30:27.690366    5952 start.go:83] releasing machines lock for "bridge-811000", held for 2.495368459s
	W0925 12:30:27.690486    5952 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:30:27.704592    5952 out.go:201] 
	W0925 12:30:27.707829    5952 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:30:27.707836    5952 out.go:270] * 
	* 
	W0925 12:30:27.708420    5952 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 12:30:27.717748    5952 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.95s)
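The config dump above hard-wires SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client and SocketVMnetPath:/var/run/socket_vmnet. If socket_vmnet is actually installed under another prefix on this agent (for example via Homebrew), the start flags corresponding to those config fields can repoint minikube; a sketch, where the Homebrew paths are an assumption about the host rather than something the log shows:

	out/minikube-darwin-arm64 start -p bridge-811000 --driver=qemu2 --network=socket_vmnet \
	    --socket-vmnet-client-path=/opt/homebrew/opt/socket_vmnet/bin/socket_vmnet_client \
	    --socket-vmnet-path=/opt/homebrew/var/run/socket_vmnet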

TestNetworkPlugins/group/kubenet/Start (9.94s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.935950666s)

-- stdout --
	* [kubenet-811000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-811000" primary control-plane node in "kubenet-811000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-811000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 12:30:29.927826    6061 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:30:29.927948    6061 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:30:29.927952    6061 out.go:358] Setting ErrFile to fd 2...
	I0925 12:30:29.927955    6061 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:30:29.928083    6061 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:30:29.929098    6061 out.go:352] Setting JSON to false
	I0925 12:30:29.945902    6061 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5400,"bootTime":1727287229,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 12:30:29.945986    6061 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 12:30:29.952651    6061 out.go:177] * [kubenet-811000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 12:30:29.960499    6061 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 12:30:29.960529    6061 notify.go:220] Checking for updates...
	I0925 12:30:29.967492    6061 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:30:29.970545    6061 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 12:30:29.971750    6061 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 12:30:29.974508    6061 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 12:30:29.977517    6061 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 12:30:29.980916    6061 config.go:182] Loaded profile config "multinode-761000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:30:29.980979    6061 config.go:182] Loaded profile config "stopped-upgrade-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0925 12:30:29.981026    6061 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 12:30:29.985480    6061 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 12:30:29.992505    6061 start.go:297] selected driver: qemu2
	I0925 12:30:29.992514    6061 start.go:901] validating driver "qemu2" against <nil>
	I0925 12:30:29.992521    6061 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 12:30:29.994872    6061 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0925 12:30:29.998483    6061 out.go:177] * Automatically selected the socket_vmnet network
	I0925 12:30:30.001583    6061 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 12:30:30.001602    6061 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0925 12:30:30.001625    6061 start.go:340] cluster config:
	{Name:kubenet-811000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:30:30.005146    6061 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:30:30.012526    6061 out.go:177] * Starting "kubenet-811000" primary control-plane node in "kubenet-811000" cluster
	I0925 12:30:30.016417    6061 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 12:30:30.016434    6061 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0925 12:30:30.016443    6061 cache.go:56] Caching tarball of preloaded images
	I0925 12:30:30.016499    6061 preload.go:172] Found /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 12:30:30.016505    6061 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0925 12:30:30.016564    6061 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/kubenet-811000/config.json ...
	I0925 12:30:30.016575    6061 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/kubenet-811000/config.json: {Name:mkc7eb87942b10be4e6ca96280349118c6b48572 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:30:30.016792    6061 start.go:360] acquireMachinesLock for kubenet-811000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:30:30.016824    6061 start.go:364] duration metric: took 26.5µs to acquireMachinesLock for "kubenet-811000"
	I0925 12:30:30.016836    6061 start.go:93] Provisioning new machine with config: &{Name:kubenet-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:30:30.016869    6061 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:30:30.024550    6061 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 12:30:30.039711    6061 start.go:159] libmachine.API.Create for "kubenet-811000" (driver="qemu2")
	I0925 12:30:30.039733    6061 client.go:168] LocalClient.Create starting
	I0925 12:30:30.039792    6061 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:30:30.039823    6061 main.go:141] libmachine: Decoding PEM data...
	I0925 12:30:30.039833    6061 main.go:141] libmachine: Parsing certificate...
	I0925 12:30:30.039873    6061 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:30:30.039896    6061 main.go:141] libmachine: Decoding PEM data...
	I0925 12:30:30.039903    6061 main.go:141] libmachine: Parsing certificate...
	I0925 12:30:30.040287    6061 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:30:30.203019    6061 main.go:141] libmachine: Creating SSH key...
	I0925 12:30:30.371036    6061 main.go:141] libmachine: Creating Disk image...
	I0925 12:30:30.371044    6061 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:30:30.371253    6061 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubenet-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubenet-811000/disk.qcow2
	I0925 12:30:30.381340    6061 main.go:141] libmachine: STDOUT: 
	I0925 12:30:30.381367    6061 main.go:141] libmachine: STDERR: 
	I0925 12:30:30.381439    6061 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubenet-811000/disk.qcow2 +20000M
	I0925 12:30:30.389611    6061 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:30:30.389628    6061 main.go:141] libmachine: STDERR: 
	I0925 12:30:30.389644    6061 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubenet-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubenet-811000/disk.qcow2
	I0925 12:30:30.389650    6061 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:30:30.389663    6061 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:30:30.389693    6061 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubenet-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubenet-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubenet-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:6f:ba:8b:1d:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubenet-811000/disk.qcow2
	I0925 12:30:30.391287    6061 main.go:141] libmachine: STDOUT: 
	I0925 12:30:30.391302    6061 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:30:30.391321    6061 client.go:171] duration metric: took 351.588208ms to LocalClient.Create
	I0925 12:30:32.393575    6061 start.go:128] duration metric: took 2.376715667s to createHost
	I0925 12:30:32.393662    6061 start.go:83] releasing machines lock for "kubenet-811000", held for 2.376872666s
	W0925 12:30:32.393735    6061 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:30:32.405148    6061 out.go:177] * Deleting "kubenet-811000" in qemu2 ...
	W0925 12:30:32.437441    6061 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:30:32.437477    6061 start.go:729] Will try again in 5 seconds ...
	I0925 12:30:37.439649    6061 start.go:360] acquireMachinesLock for kubenet-811000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:30:37.440147    6061 start.go:364] duration metric: took 383.75µs to acquireMachinesLock for "kubenet-811000"
	I0925 12:30:37.440242    6061 start.go:93] Provisioning new machine with config: &{Name:kubenet-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:30:37.440490    6061 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:30:37.453056    6061 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 12:30:37.503341    6061 start.go:159] libmachine.API.Create for "kubenet-811000" (driver="qemu2")
	I0925 12:30:37.503401    6061 client.go:168] LocalClient.Create starting
	I0925 12:30:37.503518    6061 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:30:37.503591    6061 main.go:141] libmachine: Decoding PEM data...
	I0925 12:30:37.503609    6061 main.go:141] libmachine: Parsing certificate...
	I0925 12:30:37.503673    6061 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:30:37.503734    6061 main.go:141] libmachine: Decoding PEM data...
	I0925 12:30:37.503748    6061 main.go:141] libmachine: Parsing certificate...
	I0925 12:30:37.504274    6061 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:30:37.676772    6061 main.go:141] libmachine: Creating SSH key...
	I0925 12:30:37.758079    6061 main.go:141] libmachine: Creating Disk image...
	I0925 12:30:37.758086    6061 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:30:37.758291    6061 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubenet-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubenet-811000/disk.qcow2
	I0925 12:30:37.767801    6061 main.go:141] libmachine: STDOUT: 
	I0925 12:30:37.767821    6061 main.go:141] libmachine: STDERR: 
	I0925 12:30:37.767902    6061 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubenet-811000/disk.qcow2 +20000M
	I0925 12:30:37.776784    6061 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:30:37.776801    6061 main.go:141] libmachine: STDERR: 
	I0925 12:30:37.776813    6061 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubenet-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubenet-811000/disk.qcow2
	I0925 12:30:37.776817    6061 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:30:37.776826    6061 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:30:37.776854    6061 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubenet-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubenet-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubenet-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:1a:dc:14:12:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/kubenet-811000/disk.qcow2
	I0925 12:30:37.778616    6061 main.go:141] libmachine: STDOUT: 
	I0925 12:30:37.778629    6061 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:30:37.778640    6061 client.go:171] duration metric: took 275.23975ms to LocalClient.Create
	I0925 12:30:39.783935    6061 start.go:128] duration metric: took 2.340341416s to createHost
	I0925 12:30:39.784040    6061 start.go:83] releasing machines lock for "kubenet-811000", held for 2.340802917s
	W0925 12:30:39.784371    6061 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:30:39.805017    6061 out.go:201] 
	W0925 12:30:39.815148    6061 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:30:39.815168    6061 out.go:270] * 
	* 
	W0925 12:30:39.816206    6061 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 12:30:39.830047    6061 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.94s)
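
Every failure in this group shares one proximate cause: nothing is accepting connections on "/var/run/socket_vmnet" when socket_vmnet_client tries to hand qemu-system-aarch64 its network file descriptor, so each VM create aborts with "Connection refused". The following is a minimal Go sketch of that connectivity check, runnable independently of minikube; the socket path is copied verbatim from the logs above, everything else is illustrative.

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // Socket path copied from the failing socket_vmnet_client commands above.
        const sock = "/var/run/socket_vmnet"
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // The same state the tests hit: no daemon is serving the socket.
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Printf("socket_vmnet is listening on %s\n", sock)
    }

If this check fails on the build agent, restarting the socket_vmnet daemon and confirming it owns the socket path is the first remediation to try; the retries minikube performs in the runs below cannot succeed while the socket is down.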

TestNetworkPlugins/group/custom-flannel/Start (9.89s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.888949833s)

-- stdout --
	* [custom-flannel-811000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-811000" primary control-plane node in "custom-flannel-811000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-811000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 12:30:42.032535    6172 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:30:42.032668    6172 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:30:42.032672    6172 out.go:358] Setting ErrFile to fd 2...
	I0925 12:30:42.032674    6172 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:30:42.032805    6172 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:30:42.033963    6172 out.go:352] Setting JSON to false
	I0925 12:30:42.050285    6172 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5413,"bootTime":1727287229,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 12:30:42.050344    6172 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 12:30:42.058086    6172 out.go:177] * [custom-flannel-811000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 12:30:42.065901    6172 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 12:30:42.065937    6172 notify.go:220] Checking for updates...
	I0925 12:30:42.071468    6172 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:30:42.074873    6172 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 12:30:42.077958    6172 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 12:30:42.080938    6172 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 12:30:42.083939    6172 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 12:30:42.087350    6172 config.go:182] Loaded profile config "multinode-761000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:30:42.087420    6172 config.go:182] Loaded profile config "stopped-upgrade-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0925 12:30:42.087467    6172 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 12:30:42.091960    6172 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 12:30:42.100031    6172 start.go:297] selected driver: qemu2
	I0925 12:30:42.100039    6172 start.go:901] validating driver "qemu2" against <nil>
	I0925 12:30:42.100047    6172 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 12:30:42.102330    6172 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0925 12:30:42.105987    6172 out.go:177] * Automatically selected the socket_vmnet network
	I0925 12:30:42.109133    6172 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 12:30:42.109155    6172 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0925 12:30:42.109172    6172 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0925 12:30:42.109199    6172 start.go:340] cluster config:
	{Name:custom-flannel-811000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:30:42.112737    6172 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:30:42.118859    6172 out.go:177] * Starting "custom-flannel-811000" primary control-plane node in "custom-flannel-811000" cluster
	I0925 12:30:42.122959    6172 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 12:30:42.122974    6172 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0925 12:30:42.122984    6172 cache.go:56] Caching tarball of preloaded images
	I0925 12:30:42.123060    6172 preload.go:172] Found /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 12:30:42.123066    6172 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0925 12:30:42.123124    6172 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/custom-flannel-811000/config.json ...
	I0925 12:30:42.123135    6172 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/custom-flannel-811000/config.json: {Name:mk2068dbe9c783ee7df5a95e0f89733e029f34f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:30:42.123344    6172 start.go:360] acquireMachinesLock for custom-flannel-811000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:30:42.123377    6172 start.go:364] duration metric: took 25.792µs to acquireMachinesLock for "custom-flannel-811000"
	I0925 12:30:42.123389    6172 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:30:42.123412    6172 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:30:42.132009    6172 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 12:30:42.149266    6172 start.go:159] libmachine.API.Create for "custom-flannel-811000" (driver="qemu2")
	I0925 12:30:42.149297    6172 client.go:168] LocalClient.Create starting
	I0925 12:30:42.149368    6172 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:30:42.149402    6172 main.go:141] libmachine: Decoding PEM data...
	I0925 12:30:42.149411    6172 main.go:141] libmachine: Parsing certificate...
	I0925 12:30:42.149449    6172 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:30:42.149472    6172 main.go:141] libmachine: Decoding PEM data...
	I0925 12:30:42.149479    6172 main.go:141] libmachine: Parsing certificate...
	I0925 12:30:42.149825    6172 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:30:42.313416    6172 main.go:141] libmachine: Creating SSH key...
	I0925 12:30:42.398567    6172 main.go:141] libmachine: Creating Disk image...
	I0925 12:30:42.398573    6172 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:30:42.398762    6172 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/custom-flannel-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/custom-flannel-811000/disk.qcow2
	I0925 12:30:42.408017    6172 main.go:141] libmachine: STDOUT: 
	I0925 12:30:42.408032    6172 main.go:141] libmachine: STDERR: 
	I0925 12:30:42.408105    6172 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/custom-flannel-811000/disk.qcow2 +20000M
	I0925 12:30:42.416026    6172 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:30:42.416047    6172 main.go:141] libmachine: STDERR: 
	I0925 12:30:42.416065    6172 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/custom-flannel-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/custom-flannel-811000/disk.qcow2
	I0925 12:30:42.416071    6172 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:30:42.416083    6172 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:30:42.416112    6172 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/custom-flannel-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/custom-flannel-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/custom-flannel-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:83:ac:3e:d3:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/custom-flannel-811000/disk.qcow2
	I0925 12:30:42.417819    6172 main.go:141] libmachine: STDOUT: 
	I0925 12:30:42.417834    6172 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:30:42.417851    6172 client.go:171] duration metric: took 268.106833ms to LocalClient.Create
	I0925 12:30:44.423071    6172 start.go:128] duration metric: took 2.296090292s to createHost
	I0925 12:30:44.423129    6172 start.go:83] releasing machines lock for "custom-flannel-811000", held for 2.29619125s
	W0925 12:30:44.423192    6172 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:30:44.430369    6172 out.go:177] * Deleting "custom-flannel-811000" in qemu2 ...
	W0925 12:30:44.463747    6172 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:30:44.463772    6172 start.go:729] Will try again in 5 seconds ...
	I0925 12:30:49.472116    6172 start.go:360] acquireMachinesLock for custom-flannel-811000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:30:49.472402    6172 start.go:364] duration metric: took 238.125µs to acquireMachinesLock for "custom-flannel-811000"
	I0925 12:30:49.472496    6172 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:30:49.472598    6172 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:30:49.483011    6172 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 12:30:49.514202    6172 start.go:159] libmachine.API.Create for "custom-flannel-811000" (driver="qemu2")
	I0925 12:30:49.514251    6172 client.go:168] LocalClient.Create starting
	I0925 12:30:49.514333    6172 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:30:49.514384    6172 main.go:141] libmachine: Decoding PEM data...
	I0925 12:30:49.514400    6172 main.go:141] libmachine: Parsing certificate...
	I0925 12:30:49.514459    6172 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:30:49.514494    6172 main.go:141] libmachine: Decoding PEM data...
	I0925 12:30:49.514506    6172 main.go:141] libmachine: Parsing certificate...
	I0925 12:30:49.514913    6172 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:30:49.683275    6172 main.go:141] libmachine: Creating SSH key...
	I0925 12:30:49.840861    6172 main.go:141] libmachine: Creating Disk image...
	I0925 12:30:49.840870    6172 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:30:49.841079    6172 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/custom-flannel-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/custom-flannel-811000/disk.qcow2
	I0925 12:30:49.850535    6172 main.go:141] libmachine: STDOUT: 
	I0925 12:30:49.850553    6172 main.go:141] libmachine: STDERR: 
	I0925 12:30:49.850611    6172 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/custom-flannel-811000/disk.qcow2 +20000M
	I0925 12:30:49.858582    6172 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:30:49.858606    6172 main.go:141] libmachine: STDERR: 
	I0925 12:30:49.858621    6172 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/custom-flannel-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/custom-flannel-811000/disk.qcow2
	I0925 12:30:49.858627    6172 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:30:49.858634    6172 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:30:49.858659    6172 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/custom-flannel-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/custom-flannel-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/custom-flannel-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:c2:c7:44:c5:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/custom-flannel-811000/disk.qcow2
	I0925 12:30:49.860356    6172 main.go:141] libmachine: STDOUT: 
	I0925 12:30:49.860370    6172 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:30:49.860383    6172 client.go:171] duration metric: took 345.777416ms to LocalClient.Create
	I0925 12:30:51.864333    6172 start.go:128] duration metric: took 2.38943725s to createHost
	I0925 12:30:51.864351    6172 start.go:83] releasing machines lock for "custom-flannel-811000", held for 2.389655417s
	W0925 12:30:51.864458    6172 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:30:51.873831    6172 out.go:201] 
	W0925 12:30:51.883727    6172 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:30:51.883734    6172 out.go:270] * 
	* 
	W0925 12:30:51.884294    6172 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 12:30:51.896674    6172 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.89s)
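
The stderr above also records minikube's full provisioning retry: the first createHost fails, the half-created profile is deleted, a single retry runs five seconds later, and the command exits with status 80. The sketch below compresses that control flow for reference; it is an illustration of the logged behavior, not minikube's actual source.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // createHost stands in for libmachine.API.Create and fails the same way
    // the runs above do.
    func createHost(name string) error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func startWithRetry(name string) error {
        err := createHost(name)
        if err == nil {
            return nil
        }
        fmt.Printf("! StartHost failed, but will try again: %v\n", err)
        // minikube deletes the partially created profile here, then waits.
        time.Sleep(5 * time.Second)
        if err := createHost(name); err != nil {
            return fmt.Errorf("GUEST_PROVISION: %w", err)
        }
        return nil
    }

    func main() {
        if err := startWithRetry("custom-flannel-811000"); err != nil {
            fmt.Println("X Exiting due to", err) // the CLI maps this to exit status 80
        }
    }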

TestNetworkPlugins/group/calico/Start (9.8s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.797298458s)

-- stdout --
	* [calico-811000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-811000" primary control-plane node in "calico-811000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-811000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 12:30:54.278088    6292 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:30:54.278225    6292 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:30:54.278228    6292 out.go:358] Setting ErrFile to fd 2...
	I0925 12:30:54.278231    6292 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:30:54.278358    6292 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:30:54.279447    6292 out.go:352] Setting JSON to false
	I0925 12:30:54.295735    6292 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5425,"bootTime":1727287229,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 12:30:54.295827    6292 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 12:30:54.301943    6292 out.go:177] * [calico-811000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 12:30:54.309859    6292 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 12:30:54.309949    6292 notify.go:220] Checking for updates...
	I0925 12:30:54.316830    6292 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:30:54.319828    6292 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 12:30:54.322787    6292 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 12:30:54.325838    6292 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 12:30:54.328848    6292 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 12:30:54.332127    6292 config.go:182] Loaded profile config "multinode-761000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:30:54.332190    6292 config.go:182] Loaded profile config "stopped-upgrade-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0925 12:30:54.332236    6292 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 12:30:54.336813    6292 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 12:30:54.343844    6292 start.go:297] selected driver: qemu2
	I0925 12:30:54.343850    6292 start.go:901] validating driver "qemu2" against <nil>
	I0925 12:30:54.343856    6292 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 12:30:54.345939    6292 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0925 12:30:54.348808    6292 out.go:177] * Automatically selected the socket_vmnet network
	I0925 12:30:54.351951    6292 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 12:30:54.351985    6292 cni.go:84] Creating CNI manager for "calico"
	I0925 12:30:54.351989    6292 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0925 12:30:54.352035    6292 start.go:340] cluster config:
	{Name:calico-811000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:30:54.355596    6292 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:30:54.358812    6292 out.go:177] * Starting "calico-811000" primary control-plane node in "calico-811000" cluster
	I0925 12:30:54.366625    6292 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 12:30:54.366640    6292 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0925 12:30:54.366646    6292 cache.go:56] Caching tarball of preloaded images
	I0925 12:30:54.366702    6292 preload.go:172] Found /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 12:30:54.366708    6292 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0925 12:30:54.366763    6292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/calico-811000/config.json ...
	I0925 12:30:54.366774    6292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/calico-811000/config.json: {Name:mk066d16cc2a857e7b8147e8201f8c859e676a89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:30:54.366975    6292 start.go:360] acquireMachinesLock for calico-811000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:30:54.367006    6292 start.go:364] duration metric: took 26.458µs to acquireMachinesLock for "calico-811000"
	I0925 12:30:54.367019    6292 start.go:93] Provisioning new machine with config: &{Name:calico-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:30:54.367047    6292 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:30:54.375906    6292 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 12:30:54.392642    6292 start.go:159] libmachine.API.Create for "calico-811000" (driver="qemu2")
	I0925 12:30:54.392674    6292 client.go:168] LocalClient.Create starting
	I0925 12:30:54.392738    6292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:30:54.392771    6292 main.go:141] libmachine: Decoding PEM data...
	I0925 12:30:54.392784    6292 main.go:141] libmachine: Parsing certificate...
	I0925 12:30:54.392816    6292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:30:54.392842    6292 main.go:141] libmachine: Decoding PEM data...
	I0925 12:30:54.392851    6292 main.go:141] libmachine: Parsing certificate...
	I0925 12:30:54.393184    6292 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:30:54.554949    6292 main.go:141] libmachine: Creating SSH key...
	I0925 12:30:54.646952    6292 main.go:141] libmachine: Creating Disk image...
	I0925 12:30:54.646970    6292 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:30:54.647209    6292 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/calico-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/calico-811000/disk.qcow2
	I0925 12:30:54.656430    6292 main.go:141] libmachine: STDOUT: 
	I0925 12:30:54.656453    6292 main.go:141] libmachine: STDERR: 
	I0925 12:30:54.656507    6292 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/calico-811000/disk.qcow2 +20000M
	I0925 12:30:54.665209    6292 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:30:54.665230    6292 main.go:141] libmachine: STDERR: 
	I0925 12:30:54.665247    6292 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/calico-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/calico-811000/disk.qcow2
	I0925 12:30:54.665251    6292 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:30:54.665264    6292 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:30:54.665293    6292 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/calico-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/calico-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/calico-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:db:06:c3:5b:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/calico-811000/disk.qcow2
	I0925 12:30:54.667138    6292 main.go:141] libmachine: STDOUT: 
	I0925 12:30:54.667157    6292 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:30:54.667182    6292 client.go:171] duration metric: took 274.298958ms to LocalClient.Create
	I0925 12:30:56.670770    6292 start.go:128] duration metric: took 2.302091375s to createHost
	I0925 12:30:56.670859    6292 start.go:83] releasing machines lock for "calico-811000", held for 2.302242125s
	W0925 12:30:56.670922    6292 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:30:56.682288    6292 out.go:177] * Deleting "calico-811000" in qemu2 ...
	W0925 12:30:56.718103    6292 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:30:56.718132    6292 start.go:729] Will try again in 5 seconds ...
	I0925 12:31:01.723052    6292 start.go:360] acquireMachinesLock for calico-811000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:31:01.723647    6292 start.go:364] duration metric: took 496.542µs to acquireMachinesLock for "calico-811000"
	I0925 12:31:01.723782    6292 start.go:93] Provisioning new machine with config: &{Name:calico-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:31:01.724097    6292 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:31:01.731759    6292 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 12:31:01.783961    6292 start.go:159] libmachine.API.Create for "calico-811000" (driver="qemu2")
	I0925 12:31:01.784011    6292 client.go:168] LocalClient.Create starting
	I0925 12:31:01.784122    6292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:31:01.784196    6292 main.go:141] libmachine: Decoding PEM data...
	I0925 12:31:01.784217    6292 main.go:141] libmachine: Parsing certificate...
	I0925 12:31:01.784287    6292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:31:01.784341    6292 main.go:141] libmachine: Decoding PEM data...
	I0925 12:31:01.784354    6292 main.go:141] libmachine: Parsing certificate...
	I0925 12:31:01.784880    6292 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:31:01.958029    6292 main.go:141] libmachine: Creating SSH key...
	I0925 12:31:01.984954    6292 main.go:141] libmachine: Creating Disk image...
	I0925 12:31:01.984960    6292 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:31:01.985146    6292 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/calico-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/calico-811000/disk.qcow2
	I0925 12:31:01.994547    6292 main.go:141] libmachine: STDOUT: 
	I0925 12:31:01.994565    6292 main.go:141] libmachine: STDERR: 
	I0925 12:31:01.994637    6292 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/calico-811000/disk.qcow2 +20000M
	I0925 12:31:02.002495    6292 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:31:02.002511    6292 main.go:141] libmachine: STDERR: 
	I0925 12:31:02.002522    6292 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/calico-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/calico-811000/disk.qcow2
	I0925 12:31:02.002526    6292 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:31:02.002543    6292 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:31:02.002580    6292 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/calico-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/calico-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/calico-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:88:32:a3:00:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/calico-811000/disk.qcow2
	I0925 12:31:02.004281    6292 main.go:141] libmachine: STDOUT: 
	I0925 12:31:02.004295    6292 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:31:02.004309    6292 client.go:171] duration metric: took 220.190542ms to LocalClient.Create
	I0925 12:31:04.007397    6292 start.go:128] duration metric: took 2.282247208s to createHost
	I0925 12:31:04.007478    6292 start.go:83] releasing machines lock for "calico-811000", held for 2.282824584s
	W0925 12:31:04.007859    6292 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:31:04.022424    6292 out.go:201] 
	W0925 12:31:04.027669    6292 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:31:04.027705    6292 out.go:270] * 
	* 
	W0925 12:31:04.030495    6292 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 12:31:04.040522    6292 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.80s)
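
Note that everything before the network attach succeeds in each attempt: libmachine converts the raw boot image to qcow2 and then grows it by 20000 MB, which are exactly the two qemu-img invocations in the logs. Below is a small os/exec sketch of those two steps; the paths are placeholders for the per-profile paths the tests use, and qemu-img must be on PATH.

    package main

    import (
        "log"
        "os/exec"
    )

    // run executes one command and aborts with its combined output on failure,
    // mirroring the STDOUT/STDERR capture in the log lines above.
    func run(name string, args ...string) {
        out, err := exec.Command(name, args...).CombinedOutput()
        if err != nil {
            log.Fatalf("%s %v: %v\n%s", name, args, err, out)
        }
    }

    func main() {
        raw := "disk.qcow2.raw" // placeholder for the per-profile path in the logs
        img := "disk.qcow2"     // placeholder
        run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, img)
        run("qemu-img", "resize", img, "+20000M")
    }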

TestNetworkPlugins/group/false/Start (9.82s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.81739875s)

-- stdout --
	* [false-811000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-811000" primary control-plane node in "false-811000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-811000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 12:31:06.450942    6412 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:31:06.451072    6412 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:31:06.451076    6412 out.go:358] Setting ErrFile to fd 2...
	I0925 12:31:06.451078    6412 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:31:06.451232    6412 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:31:06.452314    6412 out.go:352] Setting JSON to false
	I0925 12:31:06.468533    6412 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5437,"bootTime":1727287229,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 12:31:06.468628    6412 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 12:31:06.474366    6412 out.go:177] * [false-811000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 12:31:06.482126    6412 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 12:31:06.482161    6412 notify.go:220] Checking for updates...
	I0925 12:31:06.488111    6412 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:31:06.491168    6412 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 12:31:06.494215    6412 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 12:31:06.497134    6412 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 12:31:06.500155    6412 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 12:31:06.503721    6412 config.go:182] Loaded profile config "multinode-761000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:31:06.503797    6412 config.go:182] Loaded profile config "stopped-upgrade-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0925 12:31:06.503853    6412 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 12:31:06.508143    6412 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 12:31:06.515239    6412 start.go:297] selected driver: qemu2
	I0925 12:31:06.515252    6412 start.go:901] validating driver "qemu2" against <nil>
	I0925 12:31:06.515264    6412 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 12:31:06.517560    6412 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0925 12:31:06.520155    6412 out.go:177] * Automatically selected the socket_vmnet network
	I0925 12:31:06.523227    6412 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 12:31:06.523248    6412 cni.go:84] Creating CNI manager for "false"
	I0925 12:31:06.523274    6412 start.go:340] cluster config:
	{Name:false-811000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_
client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:31:06.526946    6412 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:31:06.531214    6412 out.go:177] * Starting "false-811000" primary control-plane node in "false-811000" cluster
	I0925 12:31:06.539171    6412 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 12:31:06.539186    6412 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0925 12:31:06.539191    6412 cache.go:56] Caching tarball of preloaded images
	I0925 12:31:06.539240    6412 preload.go:172] Found /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 12:31:06.539245    6412 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0925 12:31:06.539295    6412 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/false-811000/config.json ...
	I0925 12:31:06.539305    6412 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/false-811000/config.json: {Name:mk3dbaca91e246fa14d01009f8d7cecc8de4e2c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:31:06.539507    6412 start.go:360] acquireMachinesLock for false-811000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:31:06.539540    6412 start.go:364] duration metric: took 27.375µs to acquireMachinesLock for "false-811000"
	I0925 12:31:06.539552    6412 start.go:93] Provisioning new machine with config: &{Name:false-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:false-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:31:06.539585    6412 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:31:06.547143    6412 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 12:31:06.562375    6412 start.go:159] libmachine.API.Create for "false-811000" (driver="qemu2")
	I0925 12:31:06.562412    6412 client.go:168] LocalClient.Create starting
	I0925 12:31:06.562471    6412 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:31:06.562499    6412 main.go:141] libmachine: Decoding PEM data...
	I0925 12:31:06.562509    6412 main.go:141] libmachine: Parsing certificate...
	I0925 12:31:06.562549    6412 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:31:06.562572    6412 main.go:141] libmachine: Decoding PEM data...
	I0925 12:31:06.562582    6412 main.go:141] libmachine: Parsing certificate...
	I0925 12:31:06.562916    6412 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:31:06.722783    6412 main.go:141] libmachine: Creating SSH key...
	I0925 12:31:06.812462    6412 main.go:141] libmachine: Creating Disk image...
	I0925 12:31:06.812470    6412 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:31:06.812679    6412 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/false-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/false-811000/disk.qcow2
	I0925 12:31:06.822246    6412 main.go:141] libmachine: STDOUT: 
	I0925 12:31:06.822268    6412 main.go:141] libmachine: STDERR: 
	I0925 12:31:06.822335    6412 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/false-811000/disk.qcow2 +20000M
	I0925 12:31:06.830400    6412 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:31:06.830413    6412 main.go:141] libmachine: STDERR: 
	I0925 12:31:06.830428    6412 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/false-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/false-811000/disk.qcow2
	I0925 12:31:06.830443    6412 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:31:06.830457    6412 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:31:06.830481    6412 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/false-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/false-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/false-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:0e:ab:2f:9f:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/false-811000/disk.qcow2
	I0925 12:31:06.832113    6412 main.go:141] libmachine: STDOUT: 
	I0925 12:31:06.832129    6412 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:31:06.832149    6412 client.go:171] duration metric: took 269.642958ms to LocalClient.Create
	I0925 12:31:08.834746    6412 start.go:128] duration metric: took 2.294433916s to createHost
	I0925 12:31:08.834786    6412 start.go:83] releasing machines lock for "false-811000", held for 2.294525458s
	W0925 12:31:08.834814    6412 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:31:08.847764    6412 out.go:177] * Deleting "false-811000" in qemu2 ...
	W0925 12:31:08.872166    6412 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:31:08.872180    6412 start.go:729] Will try again in 5 seconds ...
	I0925 12:31:13.873746    6412 start.go:360] acquireMachinesLock for false-811000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:31:13.874063    6412 start.go:364] duration metric: took 238.208µs to acquireMachinesLock for "false-811000"
	I0925 12:31:13.874100    6412 start.go:93] Provisioning new machine with config: &{Name:false-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:false-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:31:13.874246    6412 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:31:13.886711    6412 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 12:31:13.923843    6412 start.go:159] libmachine.API.Create for "false-811000" (driver="qemu2")
	I0925 12:31:13.923888    6412 client.go:168] LocalClient.Create starting
	I0925 12:31:13.923997    6412 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:31:13.924054    6412 main.go:141] libmachine: Decoding PEM data...
	I0925 12:31:13.924070    6412 main.go:141] libmachine: Parsing certificate...
	I0925 12:31:13.924143    6412 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:31:13.924190    6412 main.go:141] libmachine: Decoding PEM data...
	I0925 12:31:13.924208    6412 main.go:141] libmachine: Parsing certificate...
	I0925 12:31:13.924645    6412 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:31:14.097932    6412 main.go:141] libmachine: Creating SSH key...
	I0925 12:31:14.182586    6412 main.go:141] libmachine: Creating Disk image...
	I0925 12:31:14.182595    6412 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:31:14.182800    6412 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/false-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/false-811000/disk.qcow2
	I0925 12:31:14.191885    6412 main.go:141] libmachine: STDOUT: 
	I0925 12:31:14.191904    6412 main.go:141] libmachine: STDERR: 
	I0925 12:31:14.191973    6412 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/false-811000/disk.qcow2 +20000M
	I0925 12:31:14.199946    6412 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:31:14.199966    6412 main.go:141] libmachine: STDERR: 
	I0925 12:31:14.199982    6412 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/false-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/false-811000/disk.qcow2
	I0925 12:31:14.199987    6412 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:31:14.199997    6412 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:31:14.200030    6412 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/false-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/false-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/false-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:fe:53:8e:7e:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/false-811000/disk.qcow2
	I0925 12:31:14.201641    6412 main.go:141] libmachine: STDOUT: 
	I0925 12:31:14.201658    6412 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:31:14.201673    6412 client.go:171] duration metric: took 277.723416ms to LocalClient.Create
	I0925 12:31:16.204264    6412 start.go:128] duration metric: took 2.329523s to createHost
	I0925 12:31:16.204388    6412 start.go:83] releasing machines lock for "false-811000", held for 2.329863542s
	W0925 12:31:16.204759    6412 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:31:16.213485    6412 out.go:201] 
	W0925 12:31:16.217441    6412 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:31:16.217464    6412 out.go:270] * 
	* 
	W0925 12:31:16.219324    6412 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 12:31:16.228420    6412 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.82s)
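
Note: the stderr trace above shows libmachine's disk preparation succeeding before the network failure: qemu-img convert -f raw -O qcow2 produces the machine image, then qemu-img resize grows it by +20000M. A sketch of those two steps via os/exec, mirroring the "executing: ..." lines in the log (the directory path here is hypothetical; the report uses .minikube/machines/<profile>/):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and echoes its combined output, mirroring the
// "executing: ..." / STDOUT / STDERR lines libmachine logs above.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("executing: %s %v\n%s", name, args, out)
	return err
}

func main() {
	dir := "/tmp/demo-machine" // hypothetical path for illustration
	if err := run("qemu-img", "convert", "-f", "raw", "-O", "qcow2",
		dir+"/disk.qcow2.raw", dir+"/disk.qcow2"); err != nil {
		panic(err)
	}
	if err := run("qemu-img", "resize", dir+"/disk.qcow2", "+20000M"); err != nil {
		panic(err)
	}
}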

TestStartStop/group/old-k8s-version/serial/FirstStart (9.84s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-473000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-473000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.766944666s)

-- stdout --
	* [old-k8s-version-473000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-473000" primary control-plane node in "old-k8s-version-473000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-473000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 12:31:18.427805    6526 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:31:18.427943    6526 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:31:18.427952    6526 out.go:358] Setting ErrFile to fd 2...
	I0925 12:31:18.427955    6526 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:31:18.428109    6526 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:31:18.429239    6526 out.go:352] Setting JSON to false
	I0925 12:31:18.445601    6526 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5449,"bootTime":1727287229,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 12:31:18.445681    6526 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 12:31:18.451809    6526 out.go:177] * [old-k8s-version-473000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 12:31:18.459652    6526 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 12:31:18.459689    6526 notify.go:220] Checking for updates...
	I0925 12:31:18.465715    6526 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:31:18.468645    6526 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 12:31:18.471685    6526 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 12:31:18.474707    6526 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 12:31:18.477693    6526 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 12:31:18.480983    6526 config.go:182] Loaded profile config "multinode-761000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:31:18.481048    6526 config.go:182] Loaded profile config "stopped-upgrade-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0925 12:31:18.481091    6526 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 12:31:18.485678    6526 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 12:31:18.492686    6526 start.go:297] selected driver: qemu2
	I0925 12:31:18.492694    6526 start.go:901] validating driver "qemu2" against <nil>
	I0925 12:31:18.492702    6526 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 12:31:18.495117    6526 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0925 12:31:18.497624    6526 out.go:177] * Automatically selected the socket_vmnet network
	I0925 12:31:18.500645    6526 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 12:31:18.500666    6526 cni.go:84] Creating CNI manager for ""
	I0925 12:31:18.500690    6526 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0925 12:31:18.500711    6526 start.go:340] cluster config:
	{Name:old-k8s-version-473000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-473000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/
socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:31:18.504433    6526 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:31:18.511691    6526 out.go:177] * Starting "old-k8s-version-473000" primary control-plane node in "old-k8s-version-473000" cluster
	I0925 12:31:18.515635    6526 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0925 12:31:18.515654    6526 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0925 12:31:18.515660    6526 cache.go:56] Caching tarball of preloaded images
	I0925 12:31:18.515711    6526 preload.go:172] Found /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 12:31:18.515716    6526 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0925 12:31:18.515760    6526 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/old-k8s-version-473000/config.json ...
	I0925 12:31:18.515770    6526 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/old-k8s-version-473000/config.json: {Name:mk7b630b597c0c7303c024ee501d406ad6257a8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:31:18.515974    6526 start.go:360] acquireMachinesLock for old-k8s-version-473000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:31:18.516004    6526 start.go:364] duration metric: took 24.042µs to acquireMachinesLock for "old-k8s-version-473000"
	I0925 12:31:18.516016    6526 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-473000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-473000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:31:18.516044    6526 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:31:18.524671    6526 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0925 12:31:18.539956    6526 start.go:159] libmachine.API.Create for "old-k8s-version-473000" (driver="qemu2")
	I0925 12:31:18.539991    6526 client.go:168] LocalClient.Create starting
	I0925 12:31:18.540059    6526 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:31:18.540097    6526 main.go:141] libmachine: Decoding PEM data...
	I0925 12:31:18.540106    6526 main.go:141] libmachine: Parsing certificate...
	I0925 12:31:18.540143    6526 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:31:18.540166    6526 main.go:141] libmachine: Decoding PEM data...
	I0925 12:31:18.540175    6526 main.go:141] libmachine: Parsing certificate...
	I0925 12:31:18.540532    6526 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:31:18.702577    6526 main.go:141] libmachine: Creating SSH key...
	I0925 12:31:18.738082    6526 main.go:141] libmachine: Creating Disk image...
	I0925 12:31:18.738090    6526 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:31:18.738275    6526 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/old-k8s-version-473000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/old-k8s-version-473000/disk.qcow2
	I0925 12:31:18.747351    6526 main.go:141] libmachine: STDOUT: 
	I0925 12:31:18.747371    6526 main.go:141] libmachine: STDERR: 
	I0925 12:31:18.747428    6526 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/old-k8s-version-473000/disk.qcow2 +20000M
	I0925 12:31:18.755575    6526 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:31:18.755591    6526 main.go:141] libmachine: STDERR: 
	I0925 12:31:18.755617    6526 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/old-k8s-version-473000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/old-k8s-version-473000/disk.qcow2
	I0925 12:31:18.755622    6526 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:31:18.755637    6526 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:31:18.755665    6526 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/old-k8s-version-473000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/old-k8s-version-473000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/old-k8s-version-473000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:d1:1e:e0:c4:80 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/old-k8s-version-473000/disk.qcow2
	I0925 12:31:18.757352    6526 main.go:141] libmachine: STDOUT: 
	I0925 12:31:18.757367    6526 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:31:18.757386    6526 client.go:171] duration metric: took 217.354958ms to LocalClient.Create
	I0925 12:31:20.759996    6526 start.go:128] duration metric: took 2.243609667s to createHost
	I0925 12:31:20.760077    6526 start.go:83] releasing machines lock for "old-k8s-version-473000", held for 2.243753333s
	W0925 12:31:20.760148    6526 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:31:20.777619    6526 out.go:177] * Deleting "old-k8s-version-473000" in qemu2 ...
	W0925 12:31:20.808850    6526 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:31:20.808875    6526 start.go:729] Will try again in 5 seconds ...
	I0925 12:31:25.811544    6526 start.go:360] acquireMachinesLock for old-k8s-version-473000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:31:25.811822    6526 start.go:364] duration metric: took 239.125µs to acquireMachinesLock for "old-k8s-version-473000"
	I0925 12:31:25.811857    6526 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-473000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-473000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:31:25.812007    6526 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:31:25.821302    6526 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0925 12:31:25.857297    6526 start.go:159] libmachine.API.Create for "old-k8s-version-473000" (driver="qemu2")
	I0925 12:31:25.857346    6526 client.go:168] LocalClient.Create starting
	I0925 12:31:25.857447    6526 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:31:25.857510    6526 main.go:141] libmachine: Decoding PEM data...
	I0925 12:31:25.857525    6526 main.go:141] libmachine: Parsing certificate...
	I0925 12:31:25.857591    6526 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:31:25.857630    6526 main.go:141] libmachine: Decoding PEM data...
	I0925 12:31:25.857642    6526 main.go:141] libmachine: Parsing certificate...
	I0925 12:31:25.858181    6526 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:31:26.025588    6526 main.go:141] libmachine: Creating SSH key...
	I0925 12:31:26.095153    6526 main.go:141] libmachine: Creating Disk image...
	I0925 12:31:26.095166    6526 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:31:26.095379    6526 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/old-k8s-version-473000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/old-k8s-version-473000/disk.qcow2
	I0925 12:31:26.104801    6526 main.go:141] libmachine: STDOUT: 
	I0925 12:31:26.104824    6526 main.go:141] libmachine: STDERR: 
	I0925 12:31:26.104879    6526 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/old-k8s-version-473000/disk.qcow2 +20000M
	I0925 12:31:26.112886    6526 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:31:26.112902    6526 main.go:141] libmachine: STDERR: 
	I0925 12:31:26.112913    6526 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/old-k8s-version-473000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/old-k8s-version-473000/disk.qcow2
	I0925 12:31:26.112918    6526 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:31:26.112927    6526 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:31:26.112958    6526 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/old-k8s-version-473000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/old-k8s-version-473000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/old-k8s-version-473000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:48:e2:80:76:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/old-k8s-version-473000/disk.qcow2
	I0925 12:31:26.114542    6526 main.go:141] libmachine: STDOUT: 
	I0925 12:31:26.114559    6526 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:31:26.114573    6526 client.go:171] duration metric: took 257.198875ms to LocalClient.Create
	I0925 12:31:28.117019    6526 start.go:128] duration metric: took 2.304788667s to createHost
	I0925 12:31:28.117071    6526 start.go:83] releasing machines lock for "old-k8s-version-473000", held for 2.305044209s
	W0925 12:31:28.117323    6526 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-473000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-473000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:31:28.136028    6526 out.go:201] 
	W0925 12:31:28.138882    6526 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:31:28.138906    6526 out.go:270] * 
	* 
	W0925 12:31:28.141310    6526 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 12:31:28.154915    6526 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-473000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-473000 -n old-k8s-version-473000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-473000 -n old-k8s-version-473000: exit status 7 (65.529916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-473000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.84s)
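
Note: the trace above shows minikube's create-host flow on failure: StartHost fails, the half-created profile is deleted, it waits 5 seconds (start.go:729 "Will try again in 5 seconds"), retries once, then exits with GUEST_PROVISION (exit status 80), which is what the test asserts against. A compact Go sketch of that retry-once shape (function names are mine, not minikube's):

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// startHost stands in for minikube's host creation; here it always fails
// the way the logs above do.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		// a deleteHost() step would run here; the logs show "* Deleting ... in qemu2"
		time.Sleep(5 * time.Second)
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			os.Exit(80) // exit status 80, as asserted by the test
		}
	}
}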

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-473000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-473000 create -f testdata/busybox.yaml: exit status 1 (30.24975ms)

** stderr ** 
	error: context "old-k8s-version-473000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-473000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-473000 -n old-k8s-version-473000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-473000 -n old-k8s-version-473000: exit status 7 (29.8255ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-473000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-473000 -n old-k8s-version-473000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-473000 -n old-k8s-version-473000: exit status 7 (29.683333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-473000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
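
Note: DeployApp never gets as far as deploying anything; kubectl fails immediately because the kubeconfig context "old-k8s-version-473000" was never created (FirstStart above never brought a VM up). A hedged Go sketch of a pre-check a test helper could do, shelling out to kubectl config get-contexts -o name before attempting create -f:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// contextExists lists kubeconfig context names and looks for the target,
// so a helper could fail fast with a clearer message than kubectl's
// `error: context "..." does not exist`.
func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := contextExists("old-k8s-version-473000")
	if err != nil || !ok {
		fmt.Fprintln(os.Stderr, "context missing; skipping kubectl create:", err)
		os.Exit(1)
	}
}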

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-473000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-473000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-473000 describe deploy/metrics-server -n kube-system: exit status 1 (26.772917ms)

** stderr ** 
	error: context "old-k8s-version-473000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-473000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-473000 -n old-k8s-version-473000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-473000 -n old-k8s-version-473000: exit status 7 (29.486416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-473000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
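
Note: the test enables metrics-server with --images=MetricsServer=registry.k8s.io/echoserver:1.4 and --registries=MetricsServer=fake.domain, then expects the deployment's image to contain "fake.domain/registry.k8s.io/echoserver:1.4"; the registry override is simply prefixed onto the image override. A tiny sketch of that composition (my reading of the assertion above, not minikube's actual code):

package main

import "fmt"

// overriddenImage shows how the expected string in the assertion above is
// formed: the --registries value is prepended to the --images value.
func overriddenImage(registry, image string) string {
	if registry == "" {
		return image
	}
	return registry + "/" + image
}

func main() {
	fmt.Println(overriddenImage("fake.domain", "registry.k8s.io/echoserver:1.4"))
	// prints: fake.domain/registry.k8s.io/echoserver:1.4
}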

TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-473000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-473000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.193526125s)

-- stdout --
	* [old-k8s-version-473000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-473000" primary control-plane node in "old-k8s-version-473000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-473000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-473000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 12:31:32.205093    6578 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:31:32.205249    6578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:31:32.205254    6578 out.go:358] Setting ErrFile to fd 2...
	I0925 12:31:32.205257    6578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:31:32.205394    6578 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:31:32.206690    6578 out.go:352] Setting JSON to false
	I0925 12:31:32.223453    6578 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5463,"bootTime":1727287229,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 12:31:32.223523    6578 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 12:31:32.228415    6578 out.go:177] * [old-k8s-version-473000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 12:31:32.236420    6578 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 12:31:32.236502    6578 notify.go:220] Checking for updates...
	I0925 12:31:32.244470    6578 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:31:32.247399    6578 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 12:31:32.250452    6578 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 12:31:32.253467    6578 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 12:31:32.256440    6578 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 12:31:32.259652    6578 config.go:182] Loaded profile config "old-k8s-version-473000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0925 12:31:32.263387    6578 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0925 12:31:32.266444    6578 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 12:31:32.270475    6578 out.go:177] * Using the qemu2 driver based on existing profile
	I0925 12:31:32.278423    6578 start.go:297] selected driver: qemu2
	I0925 12:31:32.278430    6578 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-473000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-473000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:31:32.278473    6578 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 12:31:32.280874    6578 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 12:31:32.280911    6578 cni.go:84] Creating CNI manager for ""
	I0925 12:31:32.280933    6578 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0925 12:31:32.280968    6578 start.go:340] cluster config:
	{Name:old-k8s-version-473000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-473000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:31:32.284497    6578 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:31:32.292386    6578 out.go:177] * Starting "old-k8s-version-473000" primary control-plane node in "old-k8s-version-473000" cluster
	I0925 12:31:32.296563    6578 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0925 12:31:32.296576    6578 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0925 12:31:32.296583    6578 cache.go:56] Caching tarball of preloaded images
	I0925 12:31:32.296640    6578 preload.go:172] Found /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 12:31:32.296646    6578 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0925 12:31:32.296697    6578 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/old-k8s-version-473000/config.json ...
	I0925 12:31:32.297169    6578 start.go:360] acquireMachinesLock for old-k8s-version-473000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:31:32.297198    6578 start.go:364] duration metric: took 23.417µs to acquireMachinesLock for "old-k8s-version-473000"
	I0925 12:31:32.297209    6578 start.go:96] Skipping create...Using existing machine configuration
	I0925 12:31:32.297215    6578 fix.go:54] fixHost starting: 
	I0925 12:31:32.297342    6578 fix.go:112] recreateIfNeeded on old-k8s-version-473000: state=Stopped err=<nil>
	W0925 12:31:32.297350    6578 fix.go:138] unexpected machine state, will restart: <nil>
	I0925 12:31:32.300398    6578 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-473000" ...
	I0925 12:31:32.308299    6578 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:31:32.308334    6578 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/old-k8s-version-473000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/old-k8s-version-473000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/old-k8s-version-473000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:48:e2:80:76:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/old-k8s-version-473000/disk.qcow2
	I0925 12:31:32.310153    6578 main.go:141] libmachine: STDOUT: 
	I0925 12:31:32.310171    6578 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:31:32.310212    6578 fix.go:56] duration metric: took 12.99575ms for fixHost
	I0925 12:31:32.310217    6578 start.go:83] releasing machines lock for "old-k8s-version-473000", held for 13.013125ms
	W0925 12:31:32.310224    6578 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:31:32.310258    6578 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:31:32.310263    6578 start.go:729] Will try again in 5 seconds ...
	I0925 12:31:37.312717    6578 start.go:360] acquireMachinesLock for old-k8s-version-473000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:31:37.313157    6578 start.go:364] duration metric: took 355.667µs to acquireMachinesLock for "old-k8s-version-473000"
	I0925 12:31:37.313281    6578 start.go:96] Skipping create...Using existing machine configuration
	I0925 12:31:37.313297    6578 fix.go:54] fixHost starting: 
	I0925 12:31:37.313909    6578 fix.go:112] recreateIfNeeded on old-k8s-version-473000: state=Stopped err=<nil>
	W0925 12:31:37.313929    6578 fix.go:138] unexpected machine state, will restart: <nil>
	I0925 12:31:37.322358    6578 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-473000" ...
	I0925 12:31:37.326305    6578 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:31:37.326490    6578 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/old-k8s-version-473000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/old-k8s-version-473000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/old-k8s-version-473000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:48:e2:80:76:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/old-k8s-version-473000/disk.qcow2
	I0925 12:31:37.333377    6578 main.go:141] libmachine: STDOUT: 
	I0925 12:31:37.333419    6578 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:31:37.333487    6578 fix.go:56] duration metric: took 20.191959ms for fixHost
	I0925 12:31:37.333503    6578 start.go:83] releasing machines lock for "old-k8s-version-473000", held for 20.3285ms
	W0925 12:31:37.333643    6578 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-473000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-473000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:31:37.341346    6578 out.go:201] 
	W0925 12:31:37.345530    6578 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:31:37.345551    6578 out.go:270] * 
	* 
	W0925 12:31:37.346876    6578 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 12:31:37.353345    6578 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-473000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-473000 -n old-k8s-version-473000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-473000 -n old-k8s-version-473000: exit status 7 (55.5305ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-473000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)
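
Note: both restart attempts above die on the same driver line: socket_vmnet_client cannot reach /var/run/socket_vmnet, so QEMU never receives its network file descriptor and the VM cannot come up. A quick way to confirm whether the socket_vmnet daemon is listening is to dial the unix socket directly; a minimal diagnostic sketch in Go (the path comes from SocketVMnetPath in the config dump above; this is not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// SocketVMnetPath from the cluster config logged above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// A "connection refused" here reproduces the driver failure:
		// no daemon is listening, so every VM start fails the same way.
		fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}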

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-473000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-473000 -n old-k8s-version-473000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-473000 -n old-k8s-version-473000: exit status 7 (32.240042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-473000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
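
Note: the "context ... does not exist" error follows directly from the SecondStart failure above: the profile never came up, so no context for it was written into the kubeconfig, and client construction fails before any pod wait can begin. A minimal sketch of the lookup that trips, assuming k8s.io/client-go is available; illustrative only, not the suite's code:

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// KUBECONFIG is set in the run environment logged above.
	cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	if _, ok := cfg.Contexts["old-k8s-version-473000"]; !ok {
		// This is the state the test trips over: the context was never
		// created because the cluster never started.
		fmt.Println(`context "old-k8s-version-473000" does not exist`)
	}
}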

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-473000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-473000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-473000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.928959ms)

** stderr ** 
	error: context "old-k8s-version-473000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-473000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-473000 -n old-k8s-version-473000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-473000 -n old-k8s-version-473000: exit status 7 (30.157ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-473000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-473000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-473000 -n old-k8s-version-473000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-473000 -n old-k8s-version-473000: exit status 7 (29.625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-473000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
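
Note: the "(-want +got)" block above is a go-cmp style diff: with the VM down, "minikube image list" returns nothing, so every expected v1.20.0 image ends up on the "-" (want-only) side. A minimal reproduction of that diff shape, assuming github.com/google/go-cmp; whether the suite uses go-cmp or a hand-rolled differ is not visible from this log:

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"k8s.gcr.io/kube-apiserver:v1.20.0",
		"k8s.gcr.io/pause:3.2",
	}
	var got []string // image list is empty while the host is stopped
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.20.0 images missing (-want +got):\n%s", diff)
	}
}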

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-473000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-473000 --alsologtostderr -v=1: exit status 83 (40.784833ms)

-- stdout --
	* The control-plane node old-k8s-version-473000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-473000"

-- /stdout --
** stderr ** 
	I0925 12:31:37.612837    6597 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:31:37.613708    6597 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:31:37.613712    6597 out.go:358] Setting ErrFile to fd 2...
	I0925 12:31:37.613715    6597 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:31:37.613877    6597 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:31:37.614080    6597 out.go:352] Setting JSON to false
	I0925 12:31:37.614089    6597 mustload.go:65] Loading cluster: old-k8s-version-473000
	I0925 12:31:37.614319    6597 config.go:182] Loaded profile config "old-k8s-version-473000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0925 12:31:37.618329    6597 out.go:177] * The control-plane node old-k8s-version-473000 host is not running: state=Stopped
	I0925 12:31:37.621130    6597 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-473000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-473000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-473000 -n old-k8s-version-473000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-473000 -n old-k8s-version-473000: exit status 7 (28.950042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-473000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-473000 -n old-k8s-version-473000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-473000 -n old-k8s-version-473000: exit status 7 (28.538875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-473000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
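
Note: every post-mortem in this group runs the same status probe, treats a non-zero exit with a Stopped host as tolerable ("may be ok"), and skips log retrieval. A minimal sketch of that flow in Go; the structure mirrors the helper output above, but the code itself is illustrative, not the helper's actual implementation:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation the post-mortem helper uses above.
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-473000")
	out, err := cmd.Output()
	host := strings.TrimSpace(string(out))
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Non-zero exit is expected while the host is stopped.
		fmt.Printf("status error: exit status %d (may be ok)\n", exitErr.ExitCode())
	}
	if host != "Running" {
		fmt.Printf("%q host is not running, skipping log retrieval (state=%q)\n",
			"old-k8s-version-473000", host)
	}
}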

TestStartStop/group/no-preload/serial/FirstStart (9.88s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-690000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-690000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.805789459s)

-- stdout --
	* [no-preload-690000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-690000" primary control-plane node in "no-preload-690000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-690000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 12:31:37.938050    6614 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:31:37.938197    6614 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:31:37.938200    6614 out.go:358] Setting ErrFile to fd 2...
	I0925 12:31:37.938202    6614 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:31:37.938336    6614 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:31:37.939457    6614 out.go:352] Setting JSON to false
	I0925 12:31:37.956120    6614 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5468,"bootTime":1727287229,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 12:31:37.956203    6614 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 12:31:37.960748    6614 out.go:177] * [no-preload-690000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 12:31:37.966710    6614 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 12:31:37.966744    6614 notify.go:220] Checking for updates...
	I0925 12:31:37.973655    6614 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:31:37.976702    6614 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 12:31:37.979680    6614 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 12:31:37.982659    6614 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 12:31:37.985696    6614 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 12:31:37.989088    6614 config.go:182] Loaded profile config "multinode-761000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:31:37.989159    6614 config.go:182] Loaded profile config "stopped-upgrade-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0925 12:31:37.989212    6614 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 12:31:37.993603    6614 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 12:31:38.000710    6614 start.go:297] selected driver: qemu2
	I0925 12:31:38.000717    6614 start.go:901] validating driver "qemu2" against <nil>
	I0925 12:31:38.000722    6614 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 12:31:38.002981    6614 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0925 12:31:38.005665    6614 out.go:177] * Automatically selected the socket_vmnet network
	I0925 12:31:38.008724    6614 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 12:31:38.008739    6614 cni.go:84] Creating CNI manager for ""
	I0925 12:31:38.008763    6614 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 12:31:38.008772    6614 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 12:31:38.008799    6614 start.go:340] cluster config:
	{Name:no-preload-690000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-690000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:31:38.012130    6614 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:31:38.017717    6614 out.go:177] * Starting "no-preload-690000" primary control-plane node in "no-preload-690000" cluster
	I0925 12:31:38.021667    6614 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 12:31:38.021720    6614 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/no-preload-690000/config.json ...
	I0925 12:31:38.021735    6614 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/no-preload-690000/config.json: {Name:mk51bc6b43aedba5b8cd09c4814550724a8daea9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:31:38.021730    6614 cache.go:107] acquiring lock: {Name:mk16675d259b1478b2c38de6eb06f168f308841a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:31:38.021731    6614 cache.go:107] acquiring lock: {Name:mk273e4e461f6b0311e73b06070cb24e4edfcf62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:31:38.021731    6614 cache.go:107] acquiring lock: {Name:mkbf8ab434d1f319e97aa174e06d503e94bb559e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:31:38.021744    6614 cache.go:107] acquiring lock: {Name:mk939227b676be566004a78016d573d50caa76c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:31:38.021745    6614 cache.go:107] acquiring lock: {Name:mke605bd49520e2df321060290e4860d6475576b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:31:38.021757    6614 cache.go:107] acquiring lock: {Name:mk9c35257494383c75b26019b5c1da004cc1a4da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:31:38.021766    6614 cache.go:107] acquiring lock: {Name:mk9fac3ef6f5981e11069a5683e914c09cc88168 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:31:38.021858    6614 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0925 12:31:38.021871    6614 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0925 12:31:38.021899    6614 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0925 12:31:38.021904    6614 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0925 12:31:38.021991    6614 cache.go:107] acquiring lock: {Name:mk13546fe2f47c59eecc9cacb71890f90577a747 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:31:38.022006    6614 cache.go:115] /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0925 12:31:38.022013    6614 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 284.75µs
	I0925 12:31:38.022018    6614 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0925 12:31:38.022191    6614 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0925 12:31:38.022207    6614 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0925 12:31:38.022230    6614 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0925 12:31:38.022278    6614 start.go:360] acquireMachinesLock for no-preload-690000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:31:38.022310    6614 start.go:364] duration metric: took 25.375µs to acquireMachinesLock for "no-preload-690000"
	I0925 12:31:38.022325    6614 start.go:93] Provisioning new machine with config: &{Name:no-preload-690000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-690000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:31:38.022355    6614 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:31:38.025683    6614 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0925 12:31:38.028939    6614 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0925 12:31:38.028985    6614 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0925 12:31:38.029010    6614 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0925 12:31:38.029102    6614 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0925 12:31:38.029101    6614 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0925 12:31:38.029127    6614 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0925 12:31:38.029138    6614 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0925 12:31:38.042196    6614 start.go:159] libmachine.API.Create for "no-preload-690000" (driver="qemu2")
	I0925 12:31:38.042242    6614 client.go:168] LocalClient.Create starting
	I0925 12:31:38.042344    6614 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:31:38.042385    6614 main.go:141] libmachine: Decoding PEM data...
	I0925 12:31:38.042393    6614 main.go:141] libmachine: Parsing certificate...
	I0925 12:31:38.042428    6614 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:31:38.042451    6614 main.go:141] libmachine: Decoding PEM data...
	I0925 12:31:38.042458    6614 main.go:141] libmachine: Parsing certificate...
	I0925 12:31:38.042827    6614 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:31:38.210246    6614 main.go:141] libmachine: Creating SSH key...
	I0925 12:31:38.273135    6614 main.go:141] libmachine: Creating Disk image...
	I0925 12:31:38.273159    6614 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:31:38.273414    6614 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/no-preload-690000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/no-preload-690000/disk.qcow2
	I0925 12:31:38.283499    6614 main.go:141] libmachine: STDOUT: 
	I0925 12:31:38.283521    6614 main.go:141] libmachine: STDERR: 
	I0925 12:31:38.283598    6614 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/no-preload-690000/disk.qcow2 +20000M
	I0925 12:31:38.292696    6614 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:31:38.292716    6614 main.go:141] libmachine: STDERR: 
	I0925 12:31:38.292742    6614 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/no-preload-690000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/no-preload-690000/disk.qcow2
	I0925 12:31:38.292748    6614 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:31:38.292761    6614 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:31:38.292793    6614 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/no-preload-690000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/no-preload-690000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/no-preload-690000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:af:ee:ed:4c:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/no-preload-690000/disk.qcow2
	I0925 12:31:38.294861    6614 main.go:141] libmachine: STDOUT: 
	I0925 12:31:38.294876    6614 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:31:38.294898    6614 client.go:171] duration metric: took 252.639583ms to LocalClient.Create
	I0925 12:31:38.454319    6614 cache.go:162] opening:  /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I0925 12:31:38.455763    6614 cache.go:162] opening:  /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1
	I0925 12:31:38.459324    6614 cache.go:162] opening:  /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0925 12:31:38.483971    6614 cache.go:162] opening:  /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1
	I0925 12:31:38.502002    6614 cache.go:162] opening:  /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0925 12:31:38.503817    6614 cache.go:162] opening:  /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1
	I0925 12:31:38.540591    6614 cache.go:162] opening:  /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0925 12:31:38.594673    6614 cache.go:157] /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0925 12:31:38.594690    6614 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 572.911209ms
	I0925 12:31:38.594726    6614 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0925 12:31:40.296419    6614 start.go:128] duration metric: took 2.273981583s to createHost
	I0925 12:31:40.296431    6614 start.go:83] releasing machines lock for "no-preload-690000", held for 2.274041708s
	W0925 12:31:40.296442    6614 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:31:40.309496    6614 out.go:177] * Deleting "no-preload-690000" in qemu2 ...
	W0925 12:31:40.323401    6614 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:31:40.323415    6614 start.go:729] Will try again in 5 seconds ...
	I0925 12:31:40.798575    6614 cache.go:157] /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0925 12:31:40.798589    6614 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 2.776757708s
	I0925 12:31:40.798598    6614 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0925 12:31:41.471443    6614 cache.go:157] /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0925 12:31:41.471538    6614 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 3.449702375s
	I0925 12:31:41.471570    6614 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0925 12:31:42.158686    6614 cache.go:157] /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0925 12:31:42.158776    6614 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 4.136653792s
	I0925 12:31:42.158807    6614 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0925 12:31:42.464644    6614 cache.go:157] /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0925 12:31:42.464695    6614 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 4.442816084s
	I0925 12:31:42.464725    6614 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0925 12:31:42.828973    6614 cache.go:157] /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0925 12:31:42.829014    6614 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 4.807143458s
	I0925 12:31:42.829041    6614 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0925 12:31:45.324179    6614 start.go:360] acquireMachinesLock for no-preload-690000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:31:45.324670    6614 start.go:364] duration metric: took 416.417µs to acquireMachinesLock for "no-preload-690000"
	I0925 12:31:45.324795    6614 start.go:93] Provisioning new machine with config: &{Name:no-preload-690000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-690000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:31:45.325002    6614 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:31:45.330154    6614 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0925 12:31:45.383452    6614 start.go:159] libmachine.API.Create for "no-preload-690000" (driver="qemu2")
	I0925 12:31:45.383503    6614 client.go:168] LocalClient.Create starting
	I0925 12:31:45.383636    6614 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:31:45.383712    6614 main.go:141] libmachine: Decoding PEM data...
	I0925 12:31:45.383737    6614 main.go:141] libmachine: Parsing certificate...
	I0925 12:31:45.383816    6614 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:31:45.383860    6614 main.go:141] libmachine: Decoding PEM data...
	I0925 12:31:45.383881    6614 main.go:141] libmachine: Parsing certificate...
	I0925 12:31:45.384378    6614 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:31:45.555935    6614 main.go:141] libmachine: Creating SSH key...
	I0925 12:31:45.640496    6614 main.go:141] libmachine: Creating Disk image...
	I0925 12:31:45.640502    6614 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:31:45.640687    6614 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/no-preload-690000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/no-preload-690000/disk.qcow2
	I0925 12:31:45.650044    6614 main.go:141] libmachine: STDOUT: 
	I0925 12:31:45.650061    6614 main.go:141] libmachine: STDERR: 
	I0925 12:31:45.650124    6614 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/no-preload-690000/disk.qcow2 +20000M
	I0925 12:31:45.658112    6614 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:31:45.658130    6614 main.go:141] libmachine: STDERR: 
	I0925 12:31:45.658141    6614 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/no-preload-690000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/no-preload-690000/disk.qcow2
	I0925 12:31:45.658147    6614 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:31:45.658155    6614 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:31:45.658215    6614 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/no-preload-690000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/no-preload-690000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/no-preload-690000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:4a:ff:20:a3:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/no-preload-690000/disk.qcow2
	I0925 12:31:45.660006    6614 main.go:141] libmachine: STDOUT: 
	I0925 12:31:45.660041    6614 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:31:45.660055    6614 client.go:171] duration metric: took 276.54025ms to LocalClient.Create
	I0925 12:31:46.736567    6614 cache.go:157] /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0925 12:31:46.736638    6614 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 8.714651958s
	I0925 12:31:46.736668    6614 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0925 12:31:46.736711    6614 cache.go:87] Successfully saved all images to host disk.
	I0925 12:31:47.661161    6614 start.go:128] duration metric: took 2.336022208s to createHost
	I0925 12:31:47.661212    6614 start.go:83] releasing machines lock for "no-preload-690000", held for 2.336482834s
	W0925 12:31:47.661553    6614 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-690000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-690000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:31:47.681136    6614 out.go:201] 
	W0925 12:31:47.684134    6614 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:31:47.684161    6614 out.go:270] * 
	* 
	W0925 12:31:47.686602    6614 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 12:31:47.701027    6614 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-690000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-690000 -n no-preload-690000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-690000 -n no-preload-690000: exit status 7 (66.79725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-690000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.88s)
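
Note on root cause: minikube launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which must obtain a network file descriptor from a socket_vmnet daemon listening on /var/run/socket_vmnet. The repeated ERROR 'Failed to connect to "/var/run/socket_vmnet": Connection refused' means nothing is accepting connections on that socket, so the VM never gets a NIC and createHost fails. A minimal sketch for probing the daemon on the build host (standard macOS tools; the Homebrew service name is an assumption that depends on how socket_vmnet was installed):

    # Is the unix socket present, and is anything listening on it?
    ls -l /var/run/socket_vmnet
    nc -U /var/run/socket_vmnet < /dev/null   # "Connection refused" reproduces the failure
    # If socket_vmnet was installed via Homebrew, restarting its service is one plausible fix:
    sudo brew services restart socket_vmnet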

TestStartStop/group/embed-certs/serial/FirstStart (9.91s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-404000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-404000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.844627958s)

-- stdout --
	* [embed-certs-404000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-404000" primary control-plane node in "embed-certs-404000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-404000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 12:31:40.348762    6655 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:31:40.348884    6655 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:31:40.348888    6655 out.go:358] Setting ErrFile to fd 2...
	I0925 12:31:40.348890    6655 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:31:40.349017    6655 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:31:40.350127    6655 out.go:352] Setting JSON to false
	I0925 12:31:40.366364    6655 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5471,"bootTime":1727287229,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 12:31:40.366435    6655 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 12:31:40.371413    6655 out.go:177] * [embed-certs-404000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 12:31:40.386586    6655 notify.go:220] Checking for updates...
	I0925 12:31:40.390407    6655 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 12:31:40.393301    6655 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:31:40.401399    6655 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 12:31:40.409363    6655 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 12:31:40.417365    6655 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 12:31:40.425428    6655 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 12:31:40.429747    6655 config.go:182] Loaded profile config "multinode-761000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:31:40.429821    6655 config.go:182] Loaded profile config "no-preload-690000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:31:40.429874    6655 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 12:31:40.433405    6655 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 12:31:40.440404    6655 start.go:297] selected driver: qemu2
	I0925 12:31:40.440410    6655 start.go:901] validating driver "qemu2" against <nil>
	I0925 12:31:40.440417    6655 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 12:31:40.442764    6655 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0925 12:31:40.446368    6655 out.go:177] * Automatically selected the socket_vmnet network
	I0925 12:31:40.450346    6655 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 12:31:40.450365    6655 cni.go:84] Creating CNI manager for ""
	I0925 12:31:40.450389    6655 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 12:31:40.450394    6655 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 12:31:40.450427    6655 start.go:340] cluster config:
	{Name:embed-certs-404000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-404000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:31:40.454342    6655 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:31:40.457435    6655 out.go:177] * Starting "embed-certs-404000" primary control-plane node in "embed-certs-404000" cluster
	I0925 12:31:40.465390    6655 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 12:31:40.465406    6655 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0925 12:31:40.465417    6655 cache.go:56] Caching tarball of preloaded images
	I0925 12:31:40.465478    6655 preload.go:172] Found /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 12:31:40.465484    6655 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0925 12:31:40.465557    6655 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/embed-certs-404000/config.json ...
	I0925 12:31:40.465568    6655 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/embed-certs-404000/config.json: {Name:mkabfafe379963a1e3aec09dc5afbcd65b6f5012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:31:40.465777    6655 start.go:360] acquireMachinesLock for embed-certs-404000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:31:40.465817    6655 start.go:364] duration metric: took 34.042µs to acquireMachinesLock for "embed-certs-404000"
	I0925 12:31:40.465830    6655 start.go:93] Provisioning new machine with config: &{Name:embed-certs-404000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-404000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:31:40.465861    6655 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:31:40.473350    6655 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0925 12:31:40.491869    6655 start.go:159] libmachine.API.Create for "embed-certs-404000" (driver="qemu2")
	I0925 12:31:40.491901    6655 client.go:168] LocalClient.Create starting
	I0925 12:31:40.491986    6655 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:31:40.492016    6655 main.go:141] libmachine: Decoding PEM data...
	I0925 12:31:40.492026    6655 main.go:141] libmachine: Parsing certificate...
	I0925 12:31:40.492065    6655 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:31:40.492089    6655 main.go:141] libmachine: Decoding PEM data...
	I0925 12:31:40.492098    6655 main.go:141] libmachine: Parsing certificate...
	I0925 12:31:40.492448    6655 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:31:40.665183    6655 main.go:141] libmachine: Creating SSH key...
	I0925 12:31:40.688513    6655 main.go:141] libmachine: Creating Disk image...
	I0925 12:31:40.688521    6655 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:31:40.688717    6655 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/embed-certs-404000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/embed-certs-404000/disk.qcow2
	I0925 12:31:40.698155    6655 main.go:141] libmachine: STDOUT: 
	I0925 12:31:40.698173    6655 main.go:141] libmachine: STDERR: 
	I0925 12:31:40.698233    6655 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/embed-certs-404000/disk.qcow2 +20000M
	I0925 12:31:40.706272    6655 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:31:40.706286    6655 main.go:141] libmachine: STDERR: 
	I0925 12:31:40.706305    6655 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/embed-certs-404000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/embed-certs-404000/disk.qcow2
	I0925 12:31:40.706309    6655 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:31:40.706333    6655 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:31:40.706360    6655 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/embed-certs-404000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/embed-certs-404000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/embed-certs-404000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:9d:c2:c4:82:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/embed-certs-404000/disk.qcow2
	I0925 12:31:40.707986    6655 main.go:141] libmachine: STDOUT: 
	I0925 12:31:40.708000    6655 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:31:40.708019    6655 client.go:171] duration metric: took 216.105125ms to LocalClient.Create
	I0925 12:31:42.710258    6655 start.go:128] duration metric: took 2.244314916s to createHost
	I0925 12:31:42.710333    6655 start.go:83] releasing machines lock for "embed-certs-404000", held for 2.244438833s
	W0925 12:31:42.710453    6655 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:31:42.719479    6655 out.go:177] * Deleting "embed-certs-404000" in qemu2 ...
	W0925 12:31:42.750381    6655 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:31:42.750399    6655 start.go:729] Will try again in 5 seconds ...
	I0925 12:31:47.752590    6655 start.go:360] acquireMachinesLock for embed-certs-404000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:31:47.752740    6655 start.go:364] duration metric: took 113.084µs to acquireMachinesLock for "embed-certs-404000"
	I0925 12:31:47.752772    6655 start.go:93] Provisioning new machine with config: &{Name:embed-certs-404000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-404000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:31:47.752882    6655 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:31:47.759953    6655 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0925 12:31:47.786492    6655 start.go:159] libmachine.API.Create for "embed-certs-404000" (driver="qemu2")
	I0925 12:31:47.786524    6655 client.go:168] LocalClient.Create starting
	I0925 12:31:47.786591    6655 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:31:47.786636    6655 main.go:141] libmachine: Decoding PEM data...
	I0925 12:31:47.786648    6655 main.go:141] libmachine: Parsing certificate...
	I0925 12:31:47.786691    6655 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:31:47.786715    6655 main.go:141] libmachine: Decoding PEM data...
	I0925 12:31:47.786724    6655 main.go:141] libmachine: Parsing certificate...
	I0925 12:31:47.787070    6655 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:31:48.026563    6655 main.go:141] libmachine: Creating SSH key...
	I0925 12:31:48.102818    6655 main.go:141] libmachine: Creating Disk image...
	I0925 12:31:48.102827    6655 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:31:48.103007    6655 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/embed-certs-404000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/embed-certs-404000/disk.qcow2
	I0925 12:31:48.112040    6655 main.go:141] libmachine: STDOUT: 
	I0925 12:31:48.112063    6655 main.go:141] libmachine: STDERR: 
	I0925 12:31:48.112134    6655 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/embed-certs-404000/disk.qcow2 +20000M
	I0925 12:31:48.120132    6655 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:31:48.120147    6655 main.go:141] libmachine: STDERR: 
	I0925 12:31:48.120161    6655 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/embed-certs-404000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/embed-certs-404000/disk.qcow2
	I0925 12:31:48.120169    6655 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:31:48.120178    6655 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:31:48.120214    6655 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/embed-certs-404000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/embed-certs-404000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/embed-certs-404000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:42:f6:6b:7b:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/embed-certs-404000/disk.qcow2
	I0925 12:31:48.121746    6655 main.go:141] libmachine: STDOUT: 
	I0925 12:31:48.121768    6655 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:31:48.121780    6655 client.go:171] duration metric: took 335.2465ms to LocalClient.Create
	I0925 12:31:50.123993    6655 start.go:128] duration metric: took 2.371058625s to createHost
	I0925 12:31:50.124062    6655 start.go:83] releasing machines lock for "embed-certs-404000", held for 2.371277791s
	W0925 12:31:50.124534    6655 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-404000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-404000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:31:50.134298    6655 out.go:201] 
	W0925 12:31:50.137480    6655 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:31:50.137520    6655 out.go:270] * 
	* 
	W0925 12:31:50.140341    6655 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 12:31:50.150287    6655 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-404000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-404000 -n embed-certs-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-404000 -n embed-certs-404000: exit status 7 (64.911083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.91s)
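
This run also exercises minikube's built-in recovery: after the first refusal it deletes the half-created profile, waits five seconds ("Will try again in 5 seconds ..."), and recreates the VM, but the retry hits the same refused socket, so the retry loop cannot mask a daemon that is not running. A sketch of starting the daemon by hand to rule out a crashed service, following socket_vmnet's documented foreground invocation (the gateway address is an assumption for this host):

    # Root is required because socket_vmnet creates the vmnet interface;
    # running in the foreground makes startup errors visible.
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet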

TestStartStop/group/no-preload/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-690000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-690000 create -f testdata/busybox.yaml: exit status 1 (32.624959ms)

** stderr ** 
	error: context "no-preload-690000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-690000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-690000 -n no-preload-690000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-690000 -n no-preload-690000: exit status 7 (31.436417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-690000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-690000 -n no-preload-690000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-690000 -n no-preload-690000: exit status 7 (33.181834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-690000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)
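
The DeployApp failure is a cascade, not an independent bug: FirstStart never created the cluster, so no kubeconfig context named no-preload-690000 exists and every "kubectl --context no-preload-690000 ..." call fails immediately. A quick sketch to confirm the cascade before debugging kubectl itself:

    kubectl config get-contexts                              # the profile's context should be absent
    out/minikube-darwin-arm64 status -p no-preload-690000    # reports "Stopped" with exit status 7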

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.14s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-690000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-690000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-690000 describe deploy/metrics-server -n kube-system: exit status 1 (29.346625ms)

** stderr ** 
	error: context "no-preload-690000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-690000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-690000 -n no-preload-690000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-690000 -n no-preload-690000: exit status 7 (33.262416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-690000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.14s)

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-404000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-404000 create -f testdata/busybox.yaml: exit status 1 (29.524084ms)

** stderr ** 
	error: context "embed-certs-404000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-404000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-404000 -n embed-certs-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-404000 -n embed-certs-404000: exit status 7 (28.915625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-404000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-404000 -n embed-certs-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-404000 -n embed-certs-404000: exit status 7 (29.163584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-404000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-404000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-404000 describe deploy/metrics-server -n kube-system: exit status 1 (26.717ms)

** stderr ** 
	error: context "embed-certs-404000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-404000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-404000 -n embed-certs-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-404000 -n embed-certs-404000: exit status 7 (29.686041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)
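
EnableAddonWhileActive fails the same way for both profiles: the "addons enable" step itself exits cleanly (it only updates the profile's stored config), but the follow-up assertion needs a live apiserver to describe the metrics-server deployment. On a healthy cluster the image override could be checked directly; a sketch using standard kubectl jsonpath (the expected value comes from the test assertion above):

    kubectl --context embed-certs-404000 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'
    # expected per the test: fake.domain/registry.k8s.io/echoserver:1.4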

TestStartStop/group/no-preload/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-690000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-690000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.200572917s)

-- stdout --
	* [no-preload-690000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-690000" primary control-plane node in "no-preload-690000" cluster
	* Restarting existing qemu2 VM for "no-preload-690000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-690000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 12:31:51.367373    6726 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:31:51.367495    6726 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:31:51.367498    6726 out.go:358] Setting ErrFile to fd 2...
	I0925 12:31:51.367501    6726 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:31:51.367636    6726 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:31:51.368608    6726 out.go:352] Setting JSON to false
	I0925 12:31:51.384466    6726 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5482,"bootTime":1727287229,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 12:31:51.384562    6726 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 12:31:51.389585    6726 out.go:177] * [no-preload-690000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 12:31:51.397561    6726 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 12:31:51.397618    6726 notify.go:220] Checking for updates...
	I0925 12:31:51.404472    6726 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:31:51.412509    6726 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 12:31:51.419558    6726 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 12:31:51.427530    6726 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 12:31:51.431588    6726 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 12:31:51.434859    6726 config.go:182] Loaded profile config "no-preload-690000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:31:51.435131    6726 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 12:31:51.439460    6726 out.go:177] * Using the qemu2 driver based on existing profile
	I0925 12:31:51.446552    6726 start.go:297] selected driver: qemu2
	I0925 12:31:51.446561    6726 start.go:901] validating driver "qemu2" against &{Name:no-preload-690000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-690000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:31:51.446647    6726 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 12:31:51.448967    6726 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 12:31:51.448996    6726 cni.go:84] Creating CNI manager for ""
	I0925 12:31:51.449019    6726 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 12:31:51.449052    6726 start.go:340] cluster config:
	{Name:no-preload-690000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-690000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:31:51.452781    6726 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:31:51.461605    6726 out.go:177] * Starting "no-preload-690000" primary control-plane node in "no-preload-690000" cluster
	I0925 12:31:51.465579    6726 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 12:31:51.465665    6726 cache.go:107] acquiring lock: {Name:mke605bd49520e2df321060290e4860d6475576b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:31:51.465669    6726 cache.go:107] acquiring lock: {Name:mk273e4e461f6b0311e73b06070cb24e4edfcf62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:31:51.465677    6726 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/no-preload-690000/config.json ...
	I0925 12:31:51.465684    6726 cache.go:107] acquiring lock: {Name:mk16675d259b1478b2c38de6eb06f168f308841a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:31:51.465728    6726 cache.go:107] acquiring lock: {Name:mk9fac3ef6f5981e11069a5683e914c09cc88168 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:31:51.465740    6726 cache.go:115] /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0925 12:31:51.465732    6726 cache.go:115] /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0925 12:31:51.465746    6726 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 81.042µs
	I0925 12:31:51.465751    6726 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 90µs
	I0925 12:31:51.465757    6726 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0925 12:31:51.465757    6726 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0925 12:31:51.465768    6726 cache.go:107] acquiring lock: {Name:mk939227b676be566004a78016d573d50caa76c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:31:51.465768    6726 cache.go:107] acquiring lock: {Name:mkbf8ab434d1f319e97aa174e06d503e94bb559e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:31:51.465771    6726 cache.go:107] acquiring lock: {Name:mk9c35257494383c75b26019b5c1da004cc1a4da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:31:51.465833    6726 cache.go:115] /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0925 12:31:51.465840    6726 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 78µs
	I0925 12:31:51.465841    6726 cache.go:115] /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0925 12:31:51.465844    6726 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0925 12:31:51.465865    6726 cache.go:115] /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0925 12:31:51.465858    6726 cache.go:115] /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0925 12:31:51.465870    6726 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 132.334µs
	I0925 12:31:51.465874    6726 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0925 12:31:51.465864    6726 cache.go:107] acquiring lock: {Name:mk13546fe2f47c59eecc9cacb71890f90577a747 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:31:51.465874    6726 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 202.958µs
	I0925 12:31:51.465883    6726 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0925 12:31:51.465887    6726 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 151.292µs
	I0925 12:31:51.465892    6726 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0925 12:31:51.465936    6726 cache.go:115] /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0925 12:31:51.465943    6726 cache.go:115] /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0925 12:31:51.465944    6726 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 136.958µs
	I0925 12:31:51.465947    6726 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 180.292µs
	I0925 12:31:51.465951    6726 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0925 12:31:51.465949    6726 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0925 12:31:51.465955    6726 cache.go:87] Successfully saved all images to host disk.
	I0925 12:31:51.466135    6726 start.go:360] acquireMachinesLock for no-preload-690000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:31:51.466164    6726 start.go:364] duration metric: took 23.791µs to acquireMachinesLock for "no-preload-690000"
	I0925 12:31:51.466175    6726 start.go:96] Skipping create...Using existing machine configuration
	I0925 12:31:51.466179    6726 fix.go:54] fixHost starting: 
	I0925 12:31:51.466301    6726 fix.go:112] recreateIfNeeded on no-preload-690000: state=Stopped err=<nil>
	W0925 12:31:51.466310    6726 fix.go:138] unexpected machine state, will restart: <nil>
	I0925 12:31:51.474496    6726 out.go:177] * Restarting existing qemu2 VM for "no-preload-690000" ...
	I0925 12:31:51.478508    6726 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:31:51.478555    6726 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/no-preload-690000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/no-preload-690000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/no-preload-690000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:4a:ff:20:a3:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/no-preload-690000/disk.qcow2
	I0925 12:31:51.480609    6726 main.go:141] libmachine: STDOUT: 
	I0925 12:31:51.480632    6726 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:31:51.480661    6726 fix.go:56] duration metric: took 14.478875ms for fixHost
	I0925 12:31:51.480666    6726 start.go:83] releasing machines lock for "no-preload-690000", held for 14.497417ms
	W0925 12:31:51.480672    6726 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:31:51.480697    6726 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:31:51.480701    6726 start.go:729] Will try again in 5 seconds ...
	I0925 12:31:56.482919    6726 start.go:360] acquireMachinesLock for no-preload-690000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:31:56.483276    6726 start.go:364] duration metric: took 286.083µs to acquireMachinesLock for "no-preload-690000"
	I0925 12:31:56.483395    6726 start.go:96] Skipping create...Using existing machine configuration
	I0925 12:31:56.483418    6726 fix.go:54] fixHost starting: 
	I0925 12:31:56.484084    6726 fix.go:112] recreateIfNeeded on no-preload-690000: state=Stopped err=<nil>
	W0925 12:31:56.484109    6726 fix.go:138] unexpected machine state, will restart: <nil>
	I0925 12:31:56.492553    6726 out.go:177] * Restarting existing qemu2 VM for "no-preload-690000" ...
	I0925 12:31:56.496646    6726 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:31:56.496809    6726 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/no-preload-690000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/no-preload-690000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/no-preload-690000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:4a:ff:20:a3:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/no-preload-690000/disk.qcow2
	I0925 12:31:56.505638    6726 main.go:141] libmachine: STDOUT: 
	I0925 12:31:56.505693    6726 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:31:56.505789    6726 fix.go:56] duration metric: took 22.372709ms for fixHost
	I0925 12:31:56.505809    6726 start.go:83] releasing machines lock for "no-preload-690000", held for 22.515917ms
	W0925 12:31:56.505974    6726 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-690000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-690000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:31:56.513582    6726 out.go:201] 
	W0925 12:31:56.516475    6726 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:31:56.516499    6726 out.go:270] * 
	* 
	W0925 12:31:56.519292    6726 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 12:31:56.527528    6726 out.go:201] 

** /stderr **
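
Every start attempt captured in this report fails the same way: socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket, so QEMU is never launched. A minimal pre-flight probe in Go illustrates the check (a sketch only, not minikube's code; the socket path is the SocketVMnetPath from the profile config logged above):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// SocketVMnetPath as it appears in the cluster config above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// This is the state every test here is in:
		// Failed to connect to "/var/run/socket_vmnet": Connection refused
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections; QEMU can be started through socket_vmnet_client")
}
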
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-690000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-690000 -n no-preload-690000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-690000 -n no-preload-690000: exit status 7 (66.489333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-690000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.27s)
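
The cache.go lines at the top of this test's log show minikube's per-image caching pattern: acquire a named lock per image, check whether the tarball already exists under .minikube/cache/images, and record how long the check took. The sketch below reproduces the idea under assumed names (cacheImage and the sync.Map of locks are hypothetical, not minikube's actual implementation):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"sync"
	"time"
)

// locks holds one mutex per cache path, mirroring the named locks
// ("acquiring lock: {Name:mk...}") in the log above.
var locks sync.Map

// cacheImage skips the save when the tarball already exists, otherwise
// writes it, and reports the duration either way.
func cacheImage(image, path string) error {
	mu, _ := locks.LoadOrStore(path, &sync.Mutex{})
	mu.(*sync.Mutex).Lock()
	defer mu.(*sync.Mutex).Unlock()

	start := time.Now()
	if _, err := os.Stat(path); err == nil {
		// Corresponds to ".../pause_3.10 exists" followed by
		// "save to tar file ... succeeded" in the log.
		fmt.Printf("cache image %q -> %q took %s\n", image, path, time.Since(start))
		return nil
	}
	// The real flow pulls the image and writes a tarball; a placeholder
	// file keeps this sketch runnable.
	if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
		return err
	}
	return os.WriteFile(path, nil, 0o644)
}

func main() {
	path := filepath.Join(os.TempDir(), "cache", "images", "arm64", "registry.k8s.io", "pause_3.10")
	_ = cacheImage("registry.k8s.io/pause:3.10", path)
	_ = cacheImage("registry.k8s.io/pause:3.10", path) // second call hits the "exists" branch
}
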

TestStartStop/group/embed-certs/serial/SecondStart (5.32s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-404000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-404000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.252923083s)

-- stdout --
	* [embed-certs-404000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-404000" primary control-plane node in "embed-certs-404000" cluster
	* Restarting existing qemu2 VM for "embed-certs-404000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-404000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 12:31:54.473199    6749 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:31:54.473343    6749 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:31:54.473346    6749 out.go:358] Setting ErrFile to fd 2...
	I0925 12:31:54.473348    6749 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:31:54.473477    6749 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:31:54.474514    6749 out.go:352] Setting JSON to false
	I0925 12:31:54.490482    6749 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5485,"bootTime":1727287229,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 12:31:54.490553    6749 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 12:31:54.494135    6749 out.go:177] * [embed-certs-404000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 12:31:54.499995    6749 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 12:31:54.500044    6749 notify.go:220] Checking for updates...
	I0925 12:31:54.508028    6749 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:31:54.511117    6749 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 12:31:54.514114    6749 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 12:31:54.517117    6749 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 12:31:54.520091    6749 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 12:31:54.523361    6749 config.go:182] Loaded profile config "embed-certs-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:31:54.523616    6749 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 12:31:54.528087    6749 out.go:177] * Using the qemu2 driver based on existing profile
	I0925 12:31:54.535061    6749 start.go:297] selected driver: qemu2
	I0925 12:31:54.535069    6749 start.go:901] validating driver "qemu2" against &{Name:embed-certs-404000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-404000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:31:54.535140    6749 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 12:31:54.537483    6749 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 12:31:54.537510    6749 cni.go:84] Creating CNI manager for ""
	I0925 12:31:54.537536    6749 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 12:31:54.537558    6749 start.go:340] cluster config:
	{Name:embed-certs-404000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-404000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:31:54.541152    6749 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:31:54.549054    6749 out.go:177] * Starting "embed-certs-404000" primary control-plane node in "embed-certs-404000" cluster
	I0925 12:31:54.553052    6749 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 12:31:54.553065    6749 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0925 12:31:54.553073    6749 cache.go:56] Caching tarball of preloaded images
	I0925 12:31:54.553135    6749 preload.go:172] Found /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 12:31:54.553140    6749 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0925 12:31:54.553210    6749 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/embed-certs-404000/config.json ...
	I0925 12:31:54.553663    6749 start.go:360] acquireMachinesLock for embed-certs-404000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:31:54.553687    6749 start.go:364] duration metric: took 19.5µs to acquireMachinesLock for "embed-certs-404000"
	I0925 12:31:54.553695    6749 start.go:96] Skipping create...Using existing machine configuration
	I0925 12:31:54.553701    6749 fix.go:54] fixHost starting: 
	I0925 12:31:54.553817    6749 fix.go:112] recreateIfNeeded on embed-certs-404000: state=Stopped err=<nil>
	W0925 12:31:54.553824    6749 fix.go:138] unexpected machine state, will restart: <nil>
	I0925 12:31:54.562119    6749 out.go:177] * Restarting existing qemu2 VM for "embed-certs-404000" ...
	I0925 12:31:54.566053    6749 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:31:54.566087    6749 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/embed-certs-404000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/embed-certs-404000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/embed-certs-404000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:42:f6:6b:7b:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/embed-certs-404000/disk.qcow2
	I0925 12:31:54.567923    6749 main.go:141] libmachine: STDOUT: 
	I0925 12:31:54.567940    6749 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:31:54.567967    6749 fix.go:56] duration metric: took 14.265917ms for fixHost
	I0925 12:31:54.567972    6749 start.go:83] releasing machines lock for "embed-certs-404000", held for 14.281209ms
	W0925 12:31:54.567978    6749 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:31:54.568007    6749 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:31:54.568011    6749 start.go:729] Will try again in 5 seconds ...
	I0925 12:31:59.570257    6749 start.go:360] acquireMachinesLock for embed-certs-404000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:31:59.618664    6749 start.go:364] duration metric: took 48.296167ms to acquireMachinesLock for "embed-certs-404000"
	I0925 12:31:59.618809    6749 start.go:96] Skipping create...Using existing machine configuration
	I0925 12:31:59.618831    6749 fix.go:54] fixHost starting: 
	I0925 12:31:59.619546    6749 fix.go:112] recreateIfNeeded on embed-certs-404000: state=Stopped err=<nil>
	W0925 12:31:59.619577    6749 fix.go:138] unexpected machine state, will restart: <nil>
	I0925 12:31:59.628932    6749 out.go:177] * Restarting existing qemu2 VM for "embed-certs-404000" ...
	I0925 12:31:59.644966    6749 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:31:59.645191    6749 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/embed-certs-404000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/embed-certs-404000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/embed-certs-404000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:42:f6:6b:7b:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/embed-certs-404000/disk.qcow2
	I0925 12:31:59.654233    6749 main.go:141] libmachine: STDOUT: 
	I0925 12:31:59.654285    6749 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:31:59.654373    6749 fix.go:56] duration metric: took 35.542291ms for fixHost
	I0925 12:31:59.654396    6749 start.go:83] releasing machines lock for "embed-certs-404000", held for 35.708ms
	W0925 12:31:59.654579    6749 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-404000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-404000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:31:59.662870    6749 out.go:201] 
	W0925 12:31:59.667011    6749 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:31:59.667034    6749 out.go:270] * 
	* 
	W0925 12:31:59.669367    6749 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 12:31:59.680786    6749 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-404000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-404000 -n embed-certs-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-404000 -n embed-certs-404000: exit status 7 (63.654916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.32s)
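
Both SecondStart failures follow the same control flow visible in the stderr above: fixHost restarts the stopped VM, the driver start fails, the machines lock is released, and the whole attempt is retried once after a fixed 5-second pause before exiting with GUEST_PROVISION. A condensed sketch of that flow (an assumed simplification; fixHost and startWithRetry here are illustrative, not minikube's actual code):

package main

import (
	"errors"
	"fmt"
	"time"
)

var errConnRefused = errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)

// fixHost stands in for restarting an existing stopped VM; in this report
// it always fails because the socket_vmnet daemon is down.
func fixHost(profile string) error {
	fmt.Printf("* Restarting existing qemu2 VM for %q ...\n", profile)
	return errConnRefused
}

func startWithRetry(profile string, attempts int, delay time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fixHost(profile); err == nil {
			return nil
		}
		if i < attempts-1 {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(delay) // the log shows a fixed "Will try again in 5 seconds"
		}
	}
	return fmt.Errorf("error provisioning guest: %w", err)
}

func main() {
	if err := startWithRetry("embed-certs-404000", 2, 5*time.Second); err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
	}
}
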

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-690000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-690000 -n no-preload-690000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-690000 -n no-preload-690000: exit status 7 (31.151ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-690000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
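
The failure message comes from kubeconfig resolution: because the cluster never came back up, no context named after the profile exists in the merged client config. A hedged sketch of that lookup using k8s.io/client-go (an assumption about the mechanism; the test's own helper code is not shown in this report):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	const context = "no-preload-690000"
	// Merge kubeconfig files the same way kubectl does (honors KUBECONFIG,
	// which the harness points at the integration kubeconfig above).
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		panic(err)
	}
	if _, ok := cfg.Contexts[context]; !ok {
		// Matches the failure above: context "no-preload-690000" does not exist
		fmt.Printf("client config: context %q does not exist\n", context)
	}
}
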

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-690000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-690000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-690000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.5805ms)

** stderr ** 
	error: context "no-preload-690000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-690000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-690000 -n no-preload-690000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-690000 -n no-preload-690000: exit status 7 (28.421917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-690000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)
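
After the wait fails, the test falls back to shelling out to kubectl, whose stderr is captured above. The invocation pattern is roughly the following (a sketch assuming kubectl is on PATH; not the harness's exact code):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "--context", "no-preload-690000",
		"describe", "deploy/dashboard-metrics-scraper", "-n", "kubernetes-dashboard")
	var stderr bytes.Buffer
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		// With the context missing, kubectl exits 1 and stderr holds:
		// error: context "no-preload-690000" does not exist
		fmt.Printf("kubectl failed (%v): %s", err, stderr.String())
	}
}
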

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-690000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-690000 -n no-preload-690000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-690000 -n no-preload-690000: exit status 7 (29.454917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-690000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
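
The "-want +got" listing above is go-cmp style diff output: every expected v1.31.1 image shows as missing because the host is Stopped, so image list returns nothing. A minimal reproduction of that comparison (a sketch; assumes the github.com/google/go-cmp module, with the image list trimmed):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/coredns/coredns:v1.11.3",
		"registry.k8s.io/etcd:3.5.15-0",
		// ... remaining v1.31.1 images as listed above
	}
	var got []string // empty: the stopped host reports no images
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.31.1 images missing (-want +got):\n%s", diff)
	}
}
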

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-690000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-690000 --alsologtostderr -v=1: exit status 83 (40.010125ms)

-- stdout --
	* The control-plane node no-preload-690000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-690000"

-- /stdout --
** stderr ** 
	I0925 12:31:56.793436    6768 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:31:56.793575    6768 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:31:56.793582    6768 out.go:358] Setting ErrFile to fd 2...
	I0925 12:31:56.793584    6768 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:31:56.793713    6768 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:31:56.793936    6768 out.go:352] Setting JSON to false
	I0925 12:31:56.793944    6768 mustload.go:65] Loading cluster: no-preload-690000
	I0925 12:31:56.794135    6768 config.go:182] Loaded profile config "no-preload-690000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:31:56.798179    6768 out.go:177] * The control-plane node no-preload-690000 host is not running: state=Stopped
	I0925 12:31:56.801177    6768 out.go:177]   To start a cluster, run: "minikube start -p no-preload-690000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-690000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-690000 -n no-preload-690000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-690000 -n no-preload-690000: exit status 7 (28.462583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-690000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-690000 -n no-preload-690000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-690000 -n no-preload-690000: exit status 7 (29.327042ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-690000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
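
Every post-mortem in this report runs the same status probe and tolerates a non-zero exit: exit status 7 with "Stopped" output is logged as "(may be ok)" and log retrieval is skipped. A sketch of that probe via os/exec (assumes the out/minikube-darwin-arm64 binary path used by the harness):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	const profile = "no-preload-690000"
	cmd := exec.Command("out/minikube-darwin-arm64",
		"status", "--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.CombinedOutput()
	fmt.Printf("host state: %s\n", out) // "Stopped" for every profile above
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Exit status 7 is what each post-mortem here records; the harness
		// treats it as "(may be ok)" rather than a hard failure.
		fmt.Printf("status exit code: %d\n", exitErr.ExitCode())
	}
}
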

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-022000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-022000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.877085625s)

-- stdout --
	* [default-k8s-diff-port-022000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-022000" primary control-plane node in "default-k8s-diff-port-022000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-022000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 12:31:57.221011    6792 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:31:57.221117    6792 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:31:57.221119    6792 out.go:358] Setting ErrFile to fd 2...
	I0925 12:31:57.221122    6792 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:31:57.221234    6792 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:31:57.222449    6792 out.go:352] Setting JSON to false
	I0925 12:31:57.239230    6792 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5488,"bootTime":1727287229,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 12:31:57.239300    6792 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 12:31:57.243115    6792 out.go:177] * [default-k8s-diff-port-022000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 12:31:57.250194    6792 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 12:31:57.250241    6792 notify.go:220] Checking for updates...
	I0925 12:31:57.257129    6792 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:31:57.260102    6792 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 12:31:57.263034    6792 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 12:31:57.266119    6792 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 12:31:57.269138    6792 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 12:31:57.272471    6792 config.go:182] Loaded profile config "embed-certs-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:31:57.272531    6792 config.go:182] Loaded profile config "multinode-761000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:31:57.272572    6792 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 12:31:57.277084    6792 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 12:31:57.284107    6792 start.go:297] selected driver: qemu2
	I0925 12:31:57.284115    6792 start.go:901] validating driver "qemu2" against <nil>
	I0925 12:31:57.284121    6792 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 12:31:57.286367    6792 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0925 12:31:57.289135    6792 out.go:177] * Automatically selected the socket_vmnet network
	I0925 12:31:57.292223    6792 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 12:31:57.292250    6792 cni.go:84] Creating CNI manager for ""
	I0925 12:31:57.292280    6792 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 12:31:57.292285    6792 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 12:31:57.292320    6792 start.go:340] cluster config:
	{Name:default-k8s-diff-port-022000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:31:57.295859    6792 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:31:57.303088    6792 out.go:177] * Starting "default-k8s-diff-port-022000" primary control-plane node in "default-k8s-diff-port-022000" cluster
	I0925 12:31:57.307128    6792 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 12:31:57.307142    6792 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0925 12:31:57.307147    6792 cache.go:56] Caching tarball of preloaded images
	I0925 12:31:57.307200    6792 preload.go:172] Found /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 12:31:57.307206    6792 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0925 12:31:57.307260    6792 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/default-k8s-diff-port-022000/config.json ...
	I0925 12:31:57.307272    6792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/default-k8s-diff-port-022000/config.json: {Name:mkad247481ceb3f667fb614189dbdecfd3b8abcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:31:57.307631    6792 start.go:360] acquireMachinesLock for default-k8s-diff-port-022000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:31:57.307672    6792 start.go:364] duration metric: took 28.5µs to acquireMachinesLock for "default-k8s-diff-port-022000"
	I0925 12:31:57.307685    6792 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-022000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:31:57.307708    6792 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:31:57.311147    6792 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0925 12:31:57.327928    6792 start.go:159] libmachine.API.Create for "default-k8s-diff-port-022000" (driver="qemu2")
	I0925 12:31:57.327960    6792 client.go:168] LocalClient.Create starting
	I0925 12:31:57.328023    6792 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:31:57.328052    6792 main.go:141] libmachine: Decoding PEM data...
	I0925 12:31:57.328062    6792 main.go:141] libmachine: Parsing certificate...
	I0925 12:31:57.328096    6792 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:31:57.328118    6792 main.go:141] libmachine: Decoding PEM data...
	I0925 12:31:57.328124    6792 main.go:141] libmachine: Parsing certificate...
	I0925 12:31:57.328551    6792 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:31:57.491475    6792 main.go:141] libmachine: Creating SSH key...
	I0925 12:31:57.597255    6792 main.go:141] libmachine: Creating Disk image...
	I0925 12:31:57.597261    6792 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:31:57.597451    6792 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/default-k8s-diff-port-022000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/default-k8s-diff-port-022000/disk.qcow2
	I0925 12:31:57.606740    6792 main.go:141] libmachine: STDOUT: 
	I0925 12:31:57.606764    6792 main.go:141] libmachine: STDERR: 
	I0925 12:31:57.606828    6792 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/default-k8s-diff-port-022000/disk.qcow2 +20000M
	I0925 12:31:57.614616    6792 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:31:57.614629    6792 main.go:141] libmachine: STDERR: 
	I0925 12:31:57.614643    6792 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/default-k8s-diff-port-022000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/default-k8s-diff-port-022000/disk.qcow2
	I0925 12:31:57.614646    6792 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:31:57.614656    6792 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:31:57.614688    6792 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/default-k8s-diff-port-022000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/default-k8s-diff-port-022000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/default-k8s-diff-port-022000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:f5:26:4d:84:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/default-k8s-diff-port-022000/disk.qcow2
	I0925 12:31:57.616268    6792 main.go:141] libmachine: STDOUT: 
	I0925 12:31:57.616282    6792 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:31:57.616304    6792 client.go:171] duration metric: took 288.336791ms to LocalClient.Create
	I0925 12:31:59.618479    6792 start.go:128] duration metric: took 2.31074725s to createHost
	I0925 12:31:59.618531    6792 start.go:83] releasing machines lock for "default-k8s-diff-port-022000", held for 2.310846375s
	W0925 12:31:59.618628    6792 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:31:59.641010    6792 out.go:177] * Deleting "default-k8s-diff-port-022000" in qemu2 ...
	W0925 12:31:59.705290    6792 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:31:59.705388    6792 start.go:729] Will try again in 5 seconds ...
	I0925 12:32:04.707642    6792 start.go:360] acquireMachinesLock for default-k8s-diff-port-022000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:32:04.708130    6792 start.go:364] duration metric: took 375.75µs to acquireMachinesLock for "default-k8s-diff-port-022000"
	I0925 12:32:04.708273    6792 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-022000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:32:04.708567    6792 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:32:04.714089    6792 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0925 12:32:04.764947    6792 start.go:159] libmachine.API.Create for "default-k8s-diff-port-022000" (driver="qemu2")
	I0925 12:32:04.765000    6792 client.go:168] LocalClient.Create starting
	I0925 12:32:04.765106    6792 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:32:04.765176    6792 main.go:141] libmachine: Decoding PEM data...
	I0925 12:32:04.765195    6792 main.go:141] libmachine: Parsing certificate...
	I0925 12:32:04.765253    6792 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:32:04.765297    6792 main.go:141] libmachine: Decoding PEM data...
	I0925 12:32:04.765313    6792 main.go:141] libmachine: Parsing certificate...
	I0925 12:32:04.766084    6792 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:32:04.948977    6792 main.go:141] libmachine: Creating SSH key...
	I0925 12:32:05.001865    6792 main.go:141] libmachine: Creating Disk image...
	I0925 12:32:05.001872    6792 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:32:05.002052    6792 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/default-k8s-diff-port-022000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/default-k8s-diff-port-022000/disk.qcow2
	I0925 12:32:05.011186    6792 main.go:141] libmachine: STDOUT: 
	I0925 12:32:05.011206    6792 main.go:141] libmachine: STDERR: 
	I0925 12:32:05.011267    6792 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/default-k8s-diff-port-022000/disk.qcow2 +20000M
	I0925 12:32:05.019156    6792 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:32:05.019183    6792 main.go:141] libmachine: STDERR: 
	I0925 12:32:05.019202    6792 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/default-k8s-diff-port-022000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/default-k8s-diff-port-022000/disk.qcow2
	I0925 12:32:05.019208    6792 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:32:05.019216    6792 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:32:05.019242    6792 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/default-k8s-diff-port-022000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/default-k8s-diff-port-022000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/default-k8s-diff-port-022000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:1b:e7:65:87:c7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/default-k8s-diff-port-022000/disk.qcow2
	I0925 12:32:05.020816    6792 main.go:141] libmachine: STDOUT: 
	I0925 12:32:05.020837    6792 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:32:05.020848    6792 client.go:171] duration metric: took 255.8445ms to LocalClient.Create
	I0925 12:32:07.023036    6792 start.go:128] duration metric: took 2.314441459s to createHost
	I0925 12:32:07.023095    6792 start.go:83] releasing machines lock for "default-k8s-diff-port-022000", held for 2.314948291s
	W0925 12:32:07.023514    6792 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-022000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-022000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:32:07.034046    6792 out.go:201] 
	W0925 12:32:07.043188    6792 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:32:07.043213    6792 out.go:270] * 
	* 
	W0925 12:32:07.046016    6792 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 12:32:07.056082    6792 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-022000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-022000 -n default-k8s-diff-port-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-022000 -n default-k8s-diff-port-022000: exit status 7 (63.992917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-022000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.95s)
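Every start attempt in this group dies at the same point: socket_vmnet_client reports Failed to connect to "/var/run/socket_vmnet": Connection refused, meaning nothing is listening on the socket_vmnet unix socket on this build host, so QEMU never boots. A minimal Go sketch (not part of the minikube test suite; the socket path is the SocketVMnetPath value from the cluster config logged above) that reproduces the failing dial:

    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Dial the same unix socket that socket_vmnet_client connects to.
    	// A refused dial here matches the STDERR captured in the runs above.
    	const sock = "/var/run/socket_vmnet"
    	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
    	if err != nil {
    		fmt.Fprintf(os.Stderr, "Failed to connect to %q: %v\n", sock, err)
    		os.Exit(1)
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }

If the dial is refused, the socket_vmnet daemon needs to be (re)started on the host before any of the qemu2 tests in this group can pass.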

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-404000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-404000 -n embed-certs-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-404000 -n embed-certs-404000: exit status 7 (31.057833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
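The error context "embed-certs-404000" does not exist here (and in the tests that follow) is downstream of the failed FirstStart: the VM never came up, so minikube never wrote a kubeconfig context for the profile, and every kubectl-based assertion fails the same way. A sketch of that check using the standard k8s.io/client-go kubeconfig loader (an illustration, not minikube's own code):

    package main

    import (
    	"fmt"
    	"os"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// KUBECONFIG matches the path logged in the runs above.
    	cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if _, ok := cfg.Contexts["embed-certs-404000"]; !ok {
    		fmt.Fprintln(os.Stderr, `error: context "embed-certs-404000" does not exist`)
    		os.Exit(1)
    	}
    	fmt.Println("context present")
    }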

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-404000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-404000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-404000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.889958ms)

** stderr ** 
	error: context "embed-certs-404000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-404000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-404000 -n embed-certs-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-404000 -n embed-certs-404000: exit status 7 (29.531542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-404000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-404000 -n embed-certs-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-404000 -n embed-certs-404000: exit status 7 (29.312875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
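The (-want +got) diff above has the shape of output from github.com/google/go-cmp (an assumption based on the format, not confirmed from the test source). With the host stopped, image list returns nothing, so the entire expected v1.31.1 image set shows as missing. A self-contained sketch reproducing that diff:

    package main

    import (
    	"fmt"

    	"github.com/google/go-cmp/cmp"
    )

    func main() {
    	want := []string{
    		"gcr.io/k8s-minikube/storage-provisioner:v5",
    		"registry.k8s.io/coredns/coredns:v1.11.3",
    		"registry.k8s.io/etcd:3.5.15-0",
    		"registry.k8s.io/kube-apiserver:v1.31.1",
    		"registry.k8s.io/kube-controller-manager:v1.31.1",
    		"registry.k8s.io/kube-proxy:v1.31.1",
    		"registry.k8s.io/kube-scheduler:v1.31.1",
    		"registry.k8s.io/pause:3.10",
    	}
    	var got []string // empty: the VM never started, so no images are listed
    	if diff := cmp.Diff(want, got); diff != "" {
    		fmt.Printf("v1.31.1 images missing (-want +got):\n%s", diff)
    	}
    }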

TestStartStop/group/embed-certs/serial/Pause (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-404000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-404000 --alsologtostderr -v=1: exit status 83 (49.192417ms)

-- stdout --
	* The control-plane node embed-certs-404000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-404000"

-- /stdout --
** stderr ** 
	I0925 12:31:59.949130    6814 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:31:59.949270    6814 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:31:59.949273    6814 out.go:358] Setting ErrFile to fd 2...
	I0925 12:31:59.949275    6814 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:31:59.949410    6814 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:31:59.949620    6814 out.go:352] Setting JSON to false
	I0925 12:31:59.949631    6814 mustload.go:65] Loading cluster: embed-certs-404000
	I0925 12:31:59.949854    6814 config.go:182] Loaded profile config "embed-certs-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:31:59.954515    6814 out.go:177] * The control-plane node embed-certs-404000 host is not running: state=Stopped
	I0925 12:31:59.965717    6814 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-404000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-404000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-404000 -n embed-certs-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-404000 -n embed-certs-404000: exit status 7 (29.329292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-404000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-404000 -n embed-certs-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-404000 -n embed-certs-404000: exit status 7 (29.246625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)

TestStartStop/group/newest-cni/serial/FirstStart (10.1s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-554000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-554000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (10.0357285s)

-- stdout --
	* [newest-cni-554000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-554000" primary control-plane node in "newest-cni-554000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-554000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 12:32:00.272958    6831 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:32:00.273112    6831 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:32:00.273115    6831 out.go:358] Setting ErrFile to fd 2...
	I0925 12:32:00.273118    6831 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:32:00.273240    6831 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:32:00.274246    6831 out.go:352] Setting JSON to false
	I0925 12:32:00.290165    6831 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5491,"bootTime":1727287229,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 12:32:00.290243    6831 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 12:32:00.295488    6831 out.go:177] * [newest-cni-554000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 12:32:00.302424    6831 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 12:32:00.302476    6831 notify.go:220] Checking for updates...
	I0925 12:32:00.308534    6831 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:32:00.311907    6831 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 12:32:00.315494    6831 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 12:32:00.318513    6831 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 12:32:00.321460    6831 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 12:32:00.324869    6831 config.go:182] Loaded profile config "default-k8s-diff-port-022000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:32:00.324928    6831 config.go:182] Loaded profile config "multinode-761000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:32:00.324987    6831 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 12:32:00.329515    6831 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 12:32:00.336477    6831 start.go:297] selected driver: qemu2
	I0925 12:32:00.336485    6831 start.go:901] validating driver "qemu2" against <nil>
	I0925 12:32:00.336493    6831 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 12:32:00.338813    6831 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0925 12:32:00.338852    6831 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0925 12:32:00.342459    6831 out.go:177] * Automatically selected the socket_vmnet network
	I0925 12:32:00.349585    6831 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0925 12:32:00.349607    6831 cni.go:84] Creating CNI manager for ""
	I0925 12:32:00.349639    6831 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 12:32:00.349644    6831 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 12:32:00.349679    6831 start.go:340] cluster config:
	{Name:newest-cni-554000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:32:00.353278    6831 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:32:00.364288    6831 out.go:177] * Starting "newest-cni-554000" primary control-plane node in "newest-cni-554000" cluster
	I0925 12:32:00.372463    6831 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 12:32:00.372488    6831 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0925 12:32:00.372495    6831 cache.go:56] Caching tarball of preloaded images
	I0925 12:32:00.372583    6831 preload.go:172] Found /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 12:32:00.372589    6831 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0925 12:32:00.372654    6831 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/newest-cni-554000/config.json ...
	I0925 12:32:00.372668    6831 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/newest-cni-554000/config.json: {Name:mkbe137f8c10f0b1708020b11538b4475456985d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 12:32:00.372909    6831 start.go:360] acquireMachinesLock for newest-cni-554000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:32:00.372944    6831 start.go:364] duration metric: took 28.667µs to acquireMachinesLock for "newest-cni-554000"
	I0925 12:32:00.372959    6831 start.go:93] Provisioning new machine with config: &{Name:newest-cni-554000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:32:00.372989    6831 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:32:00.393524    6831 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0925 12:32:00.414034    6831 start.go:159] libmachine.API.Create for "newest-cni-554000" (driver="qemu2")
	I0925 12:32:00.414070    6831 client.go:168] LocalClient.Create starting
	I0925 12:32:00.414143    6831 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:32:00.414181    6831 main.go:141] libmachine: Decoding PEM data...
	I0925 12:32:00.414191    6831 main.go:141] libmachine: Parsing certificate...
	I0925 12:32:00.414225    6831 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:32:00.414251    6831 main.go:141] libmachine: Decoding PEM data...
	I0925 12:32:00.414260    6831 main.go:141] libmachine: Parsing certificate...
	I0925 12:32:00.414709    6831 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:32:00.578536    6831 main.go:141] libmachine: Creating SSH key...
	I0925 12:32:00.664536    6831 main.go:141] libmachine: Creating Disk image...
	I0925 12:32:00.664542    6831 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:32:00.664747    6831 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/newest-cni-554000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/newest-cni-554000/disk.qcow2
	I0925 12:32:00.673898    6831 main.go:141] libmachine: STDOUT: 
	I0925 12:32:00.673915    6831 main.go:141] libmachine: STDERR: 
	I0925 12:32:00.673977    6831 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/newest-cni-554000/disk.qcow2 +20000M
	I0925 12:32:00.681695    6831 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:32:00.681709    6831 main.go:141] libmachine: STDERR: 
	I0925 12:32:00.681724    6831 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/newest-cni-554000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/newest-cni-554000/disk.qcow2
	I0925 12:32:00.681729    6831 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:32:00.681742    6831 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:32:00.681771    6831 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/newest-cni-554000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/newest-cni-554000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/newest-cni-554000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:54:f7:e9:82:8f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/newest-cni-554000/disk.qcow2
	I0925 12:32:00.683379    6831 main.go:141] libmachine: STDOUT: 
	I0925 12:32:00.683398    6831 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:32:00.683419    6831 client.go:171] duration metric: took 269.342791ms to LocalClient.Create
	I0925 12:32:02.685579    6831 start.go:128] duration metric: took 2.312566875s to createHost
	I0925 12:32:02.685624    6831 start.go:83] releasing machines lock for "newest-cni-554000", held for 2.312671375s
	W0925 12:32:02.685691    6831 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:32:02.695636    6831 out.go:177] * Deleting "newest-cni-554000" in qemu2 ...
	W0925 12:32:02.733240    6831 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:32:02.733257    6831 start.go:729] Will try again in 5 seconds ...
	I0925 12:32:07.735488    6831 start.go:360] acquireMachinesLock for newest-cni-554000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:32:07.735851    6831 start.go:364] duration metric: took 265.583µs to acquireMachinesLock for "newest-cni-554000"
	I0925 12:32:07.735952    6831 start.go:93] Provisioning new machine with config: &{Name:newest-cni-554000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 12:32:07.736260    6831 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 12:32:07.742042    6831 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0925 12:32:07.787686    6831 start.go:159] libmachine.API.Create for "newest-cni-554000" (driver="qemu2")
	I0925 12:32:07.787731    6831 client.go:168] LocalClient.Create starting
	I0925 12:32:07.787826    6831 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/ca.pem
	I0925 12:32:07.787876    6831 main.go:141] libmachine: Decoding PEM data...
	I0925 12:32:07.787895    6831 main.go:141] libmachine: Parsing certificate...
	I0925 12:32:07.787965    6831 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19681-1412/.minikube/certs/cert.pem
	I0925 12:32:07.787995    6831 main.go:141] libmachine: Decoding PEM data...
	I0925 12:32:07.788014    6831 main.go:141] libmachine: Parsing certificate...
	I0925 12:32:07.788518    6831 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0925 12:32:08.023035    6831 main.go:141] libmachine: Creating SSH key...
	I0925 12:32:08.208302    6831 main.go:141] libmachine: Creating Disk image...
	I0925 12:32:08.208316    6831 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 12:32:08.208561    6831 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/newest-cni-554000/disk.qcow2.raw /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/newest-cni-554000/disk.qcow2
	I0925 12:32:08.218067    6831 main.go:141] libmachine: STDOUT: 
	I0925 12:32:08.218091    6831 main.go:141] libmachine: STDERR: 
	I0925 12:32:08.218157    6831 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/newest-cni-554000/disk.qcow2 +20000M
	I0925 12:32:08.225953    6831 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 12:32:08.225969    6831 main.go:141] libmachine: STDERR: 
	I0925 12:32:08.225986    6831 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/newest-cni-554000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/newest-cni-554000/disk.qcow2
	I0925 12:32:08.225993    6831 main.go:141] libmachine: Starting QEMU VM...
	I0925 12:32:08.226002    6831 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:32:08.226039    6831 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/newest-cni-554000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/newest-cni-554000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/newest-cni-554000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:72:0a:cb:36:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/newest-cni-554000/disk.qcow2
	I0925 12:32:08.227684    6831 main.go:141] libmachine: STDOUT: 
	I0925 12:32:08.227696    6831 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:32:08.227712    6831 client.go:171] duration metric: took 439.976208ms to LocalClient.Create
	I0925 12:32:10.229903    6831 start.go:128] duration metric: took 2.493599375s to createHost
	I0925 12:32:10.229963    6831 start.go:83] releasing machines lock for "newest-cni-554000", held for 2.494102166s
	W0925 12:32:10.230341    6831 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-554000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-554000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:32:10.245173    6831 out.go:201] 
	W0925 12:32:10.249120    6831 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:32:10.249152    6831 out.go:270] * 
	* 
	W0925 12:32:10.251876    6831 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 12:32:10.266039    6831 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-554000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-554000 -n newest-cni-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-554000 -n newest-cni-554000: exit status 7 (65.037083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-554000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.10s)
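Both FirstStart failures in this group follow the same shape: create the host, hit the refused socket, delete the profile, wait five seconds ("Will try again in 5 seconds ..."), retry once, then exit 80 with GUEST_PROVISION. A sketch of that single-retry behavior as it appears in the log (the createHost stub below is hypothetical, standing in for the real provisioning path):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // createHost stands in for the provisioning step that fails above.
    func createHost() error {
    	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
    	if err := createHost(); err != nil {
    		fmt.Println("! StartHost failed, but will try again:", err)
    		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
    		if err := createHost(); err != nil {
    			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
    			return
    		}
    	}
    	fmt.Println("host started")
    }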

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-022000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-022000 create -f testdata/busybox.yaml: exit status 1 (29.317166ms)

** stderr ** 
	error: context "default-k8s-diff-port-022000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-022000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-022000 -n default-k8s-diff-port-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-022000 -n default-k8s-diff-port-022000: exit status 7 (29.341458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-022000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-022000 -n default-k8s-diff-port-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-022000 -n default-k8s-diff-port-022000: exit status 7 (28.897792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-022000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-022000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-022000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-022000 describe deploy/metrics-server -n kube-system: exit status 1 (27.586041ms)

** stderr ** 
	error: context "default-k8s-diff-port-022000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-022000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-022000 -n default-k8s-diff-port-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-022000 -n default-k8s-diff-port-022000: exit status 7 (29.6345ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-022000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-022000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-022000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.185327959s)

-- stdout --
	* [default-k8s-diff-port-022000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-022000" primary control-plane node in "default-k8s-diff-port-022000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-022000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-022000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0925 12:32:10.658048    6891 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:32:10.658203    6891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:32:10.658207    6891 out.go:358] Setting ErrFile to fd 2...
	I0925 12:32:10.658209    6891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:32:10.658360    6891 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:32:10.659384    6891 out.go:352] Setting JSON to false
	I0925 12:32:10.675209    6891 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5501,"bootTime":1727287229,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 12:32:10.675269    6891 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 12:32:10.680629    6891 out.go:177] * [default-k8s-diff-port-022000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 12:32:10.687569    6891 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 12:32:10.687638    6891 notify.go:220] Checking for updates...
	I0925 12:32:10.692789    6891 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:32:10.695547    6891 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 12:32:10.698601    6891 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 12:32:10.701601    6891 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 12:32:10.705605    6891 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 12:32:10.710038    6891 config.go:182] Loaded profile config "default-k8s-diff-port-022000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:32:10.710336    6891 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 12:32:10.714436    6891 out.go:177] * Using the qemu2 driver based on existing profile
	I0925 12:32:10.721574    6891 start.go:297] selected driver: qemu2
	I0925 12:32:10.721581    6891 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-022000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:32:10.721634    6891 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 12:32:10.723796    6891 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 12:32:10.723823    6891 cni.go:84] Creating CNI manager for ""
	I0925 12:32:10.723856    6891 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 12:32:10.723878    6891 start.go:340] cluster config:
	{Name:default-k8s-diff-port-022000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-022000 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:32:10.727272    6891 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:32:10.735508    6891 out.go:177] * Starting "default-k8s-diff-port-022000" primary control-plane node in "default-k8s-diff-port-022000" cluster
	I0925 12:32:10.738453    6891 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 12:32:10.738468    6891 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0925 12:32:10.738474    6891 cache.go:56] Caching tarball of preloaded images
	I0925 12:32:10.738529    6891 preload.go:172] Found /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 12:32:10.738535    6891 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0925 12:32:10.738592    6891 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/default-k8s-diff-port-022000/config.json ...
	I0925 12:32:10.738975    6891 start.go:360] acquireMachinesLock for default-k8s-diff-port-022000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:32:10.739005    6891 start.go:364] duration metric: took 23.541µs to acquireMachinesLock for "default-k8s-diff-port-022000"
	I0925 12:32:10.739015    6891 start.go:96] Skipping create...Using existing machine configuration
	I0925 12:32:10.739020    6891 fix.go:54] fixHost starting: 
	I0925 12:32:10.739143    6891 fix.go:112] recreateIfNeeded on default-k8s-diff-port-022000: state=Stopped err=<nil>
	W0925 12:32:10.739152    6891 fix.go:138] unexpected machine state, will restart: <nil>
	I0925 12:32:10.743569    6891 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-022000" ...
	I0925 12:32:10.750534    6891 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:32:10.750575    6891 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/default-k8s-diff-port-022000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/default-k8s-diff-port-022000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/default-k8s-diff-port-022000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:1b:e7:65:87:c7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/default-k8s-diff-port-022000/disk.qcow2
	I0925 12:32:10.752715    6891 main.go:141] libmachine: STDOUT: 
	I0925 12:32:10.752733    6891 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:32:10.752766    6891 fix.go:56] duration metric: took 13.742875ms for fixHost
	I0925 12:32:10.752770    6891 start.go:83] releasing machines lock for "default-k8s-diff-port-022000", held for 13.760708ms
	W0925 12:32:10.752779    6891 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:32:10.752830    6891 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:32:10.752835    6891 start.go:729] Will try again in 5 seconds ...
	I0925 12:32:15.755042    6891 start.go:360] acquireMachinesLock for default-k8s-diff-port-022000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:32:15.755596    6891 start.go:364] duration metric: took 431.708µs to acquireMachinesLock for "default-k8s-diff-port-022000"
	I0925 12:32:15.755748    6891 start.go:96] Skipping create...Using existing machine configuration
	I0925 12:32:15.755767    6891 fix.go:54] fixHost starting: 
	I0925 12:32:15.756598    6891 fix.go:112] recreateIfNeeded on default-k8s-diff-port-022000: state=Stopped err=<nil>
	W0925 12:32:15.756625    6891 fix.go:138] unexpected machine state, will restart: <nil>
	I0925 12:32:15.766326    6891 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-022000" ...
	I0925 12:32:15.770228    6891 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:32:15.770443    6891 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/default-k8s-diff-port-022000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/default-k8s-diff-port-022000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/default-k8s-diff-port-022000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:1b:e7:65:87:c7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/default-k8s-diff-port-022000/disk.qcow2
	I0925 12:32:15.780872    6891 main.go:141] libmachine: STDOUT: 
	I0925 12:32:15.780933    6891 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:32:15.781029    6891 fix.go:56] duration metric: took 25.262042ms for fixHost
	I0925 12:32:15.781045    6891 start.go:83] releasing machines lock for "default-k8s-diff-port-022000", held for 25.42675ms
	W0925 12:32:15.781239    6891 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-022000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-022000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:32:15.787262    6891 out.go:201] 
	W0925 12:32:15.791359    6891 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:32:15.791385    6891 out.go:270] * 
	* 
	W0925 12:32:15.793800    6891 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 12:32:15.802102    6891 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-022000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-022000 -n default-k8s-diff-port-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-022000 -n default-k8s-diff-port-022000: exit status 7 (67.223ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-022000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.25s)
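
Note: the root cause is visible in the stderr above. Every qemu2 start goes through /opt/socket_vmnet/bin/socket_vmnet_client, and both attempts (including the 5-second retry) failed with 'Failed to connect to "/var/run/socket_vmnet": Connection refused', meaning no socket_vmnet daemon was listening on the agent; minikube then surfaces this as GUEST_PROVISION / exit status 80. A minimal triage sketch for the CI host, assuming a stock /opt/socket_vmnet install (the daemon path and flags below are assumptions, not taken from this report):

	# Check whether anything is serving the unix socket the driver dials.
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# If not, start the daemon before re-running the suite (gateway flag
	# assumed from a default socket_vmnet setup; adjust to the agent).
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &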

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-554000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-554000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.183593833s)

-- stdout --
	* [newest-cni-554000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-554000" primary control-plane node in "newest-cni-554000" cluster
	* Restarting existing qemu2 VM for "newest-cni-554000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-554000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 12:32:14.220341    6918 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:32:14.220475    6918 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:32:14.220480    6918 out.go:358] Setting ErrFile to fd 2...
	I0925 12:32:14.220483    6918 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:32:14.220613    6918 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:32:14.221630    6918 out.go:352] Setting JSON to false
	I0925 12:32:14.237537    6918 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5505,"bootTime":1727287229,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 12:32:14.237606    6918 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 12:32:14.242516    6918 out.go:177] * [newest-cni-554000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 12:32:14.249463    6918 notify.go:220] Checking for updates...
	I0925 12:32:14.252467    6918 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 12:32:14.256393    6918 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 12:32:14.259426    6918 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 12:32:14.262480    6918 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 12:32:14.265440    6918 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 12:32:14.268425    6918 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 12:32:14.271843    6918 config.go:182] Loaded profile config "newest-cni-554000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:32:14.272112    6918 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 12:32:14.276378    6918 out.go:177] * Using the qemu2 driver based on existing profile
	I0925 12:32:14.283459    6918 start.go:297] selected driver: qemu2
	I0925 12:32:14.283467    6918 start.go:901] validating driver "qemu2" against &{Name:newest-cni-554000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:newest-cni-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Lis
tenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:32:14.283527    6918 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 12:32:14.285948    6918 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0925 12:32:14.285979    6918 cni.go:84] Creating CNI manager for ""
	I0925 12:32:14.286005    6918 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 12:32:14.286030    6918 start.go:340] cluster config:
	{Name:newest-cni-554000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-554000 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0
CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 12:32:14.289722    6918 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 12:32:14.297400    6918 out.go:177] * Starting "newest-cni-554000" primary control-plane node in "newest-cni-554000" cluster
	I0925 12:32:14.301389    6918 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 12:32:14.301404    6918 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0925 12:32:14.301413    6918 cache.go:56] Caching tarball of preloaded images
	I0925 12:32:14.301466    6918 preload.go:172] Found /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 12:32:14.301472    6918 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0925 12:32:14.301523    6918 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/newest-cni-554000/config.json ...
	I0925 12:32:14.301976    6918 start.go:360] acquireMachinesLock for newest-cni-554000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:32:14.302005    6918 start.go:364] duration metric: took 23.542µs to acquireMachinesLock for "newest-cni-554000"
	I0925 12:32:14.302015    6918 start.go:96] Skipping create...Using existing machine configuration
	I0925 12:32:14.302021    6918 fix.go:54] fixHost starting: 
	I0925 12:32:14.302146    6918 fix.go:112] recreateIfNeeded on newest-cni-554000: state=Stopped err=<nil>
	W0925 12:32:14.302155    6918 fix.go:138] unexpected machine state, will restart: <nil>
	I0925 12:32:14.306461    6918 out.go:177] * Restarting existing qemu2 VM for "newest-cni-554000" ...
	I0925 12:32:14.314420    6918 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:32:14.314454    6918 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/newest-cni-554000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/newest-cni-554000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/newest-cni-554000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:72:0a:cb:36:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/newest-cni-554000/disk.qcow2
	I0925 12:32:14.316451    6918 main.go:141] libmachine: STDOUT: 
	I0925 12:32:14.316470    6918 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:32:14.316502    6918 fix.go:56] duration metric: took 14.479417ms for fixHost
	I0925 12:32:14.316507    6918 start.go:83] releasing machines lock for "newest-cni-554000", held for 14.49675ms
	W0925 12:32:14.316514    6918 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:32:14.316551    6918 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:32:14.316556    6918 start.go:729] Will try again in 5 seconds ...
	I0925 12:32:19.318761    6918 start.go:360] acquireMachinesLock for newest-cni-554000: {Name:mkd4065c6d1c2136bf83ebc4338945aee4e59c6b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 12:32:19.319321    6918 start.go:364] duration metric: took 423.625µs to acquireMachinesLock for "newest-cni-554000"
	I0925 12:32:19.319459    6918 start.go:96] Skipping create...Using existing machine configuration
	I0925 12:32:19.319480    6918 fix.go:54] fixHost starting: 
	I0925 12:32:19.320275    6918 fix.go:112] recreateIfNeeded on newest-cni-554000: state=Stopped err=<nil>
	W0925 12:32:19.320301    6918 fix.go:138] unexpected machine state, will restart: <nil>
	I0925 12:32:19.328705    6918 out.go:177] * Restarting existing qemu2 VM for "newest-cni-554000" ...
	I0925 12:32:19.332749    6918 qemu.go:418] Using hvf for hardware acceleration
	I0925 12:32:19.332950    6918 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/newest-cni-554000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/newest-cni-554000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/newest-cni-554000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:72:0a:cb:36:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19681-1412/.minikube/machines/newest-cni-554000/disk.qcow2
	I0925 12:32:19.342546    6918 main.go:141] libmachine: STDOUT: 
	I0925 12:32:19.342607    6918 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 12:32:19.342678    6918 fix.go:56] duration metric: took 23.199959ms for fixHost
	I0925 12:32:19.342705    6918 start.go:83] releasing machines lock for "newest-cni-554000", held for 23.354083ms
	W0925 12:32:19.342856    6918 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-554000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-554000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 12:32:19.350725    6918 out.go:201] 
	W0925 12:32:19.353804    6918 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 12:32:19.353827    6918 out.go:270] * 
	* 
	W0925 12:32:19.356454    6918 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 12:32:19.363775    6918 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-554000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-554000 -n newest-cni-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-554000 -n newest-cni-554000: exit status 7 (68.787375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-554000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-022000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-022000 -n default-k8s-diff-port-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-022000 -n default-k8s-diff-port-022000: exit status 7 (31.78975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-022000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)
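
Note: because SecondStart exited before provisioning, no kubeconfig context named "default-k8s-diff-port-022000" was ever created, so this check (and the kubectl-based checks below) fails fast on "context does not exist" rather than on a real dashboard regression. One way to confirm from the agent, using plain kubectl (not part of this report's output):

	kubectl config get-contexts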

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-022000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-022000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-022000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.705333ms)

** stderr ** 
	error: context "default-k8s-diff-port-022000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-022000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-022000 -n default-k8s-diff-port-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-022000 -n default-k8s-diff-port-022000: exit status 7 (28.972375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-022000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-022000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-022000 -n default-k8s-diff-port-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-022000 -n default-k8s-diff-port-022000: exit status 7 (28.949792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-022000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
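
Note: the "(-want +got)" block above is a want/got diff: each "-" line is an image expected for v1.31.1 that was missing from the actual `image list` output, and there are no "+" lines because the command ran against a VM that never started, so the got side was empty. The missing images are a downstream effect of the SecondStart failure, not an image-pull problem in their own right.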

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-022000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-022000 --alsologtostderr -v=1: exit status 83 (39.664834ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-022000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-022000"

-- /stdout --
** stderr ** 
	I0925 12:32:16.067918    6937 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:32:16.068069    6937 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:32:16.068073    6937 out.go:358] Setting ErrFile to fd 2...
	I0925 12:32:16.068075    6937 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:32:16.068197    6937 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:32:16.068419    6937 out.go:352] Setting JSON to false
	I0925 12:32:16.068427    6937 mustload.go:65] Loading cluster: default-k8s-diff-port-022000
	I0925 12:32:16.068641    6937 config.go:182] Loaded profile config "default-k8s-diff-port-022000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:32:16.072626    6937 out.go:177] * The control-plane node default-k8s-diff-port-022000 host is not running: state=Stopped
	I0925 12:32:16.076597    6937 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-022000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-022000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-022000 -n default-k8s-diff-port-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-022000 -n default-k8s-diff-port-022000: exit status 7 (28.970667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-022000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-022000 -n default-k8s-diff-port-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-022000 -n default-k8s-diff-port-022000: exit status 7 (28.587375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-022000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-554000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-554000 -n newest-cni-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-554000 -n newest-cni-554000: exit status 7 (30.109792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-554000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-554000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-554000 --alsologtostderr -v=1: exit status 83 (43.4335ms)

-- stdout --
	* The control-plane node newest-cni-554000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-554000"

-- /stdout --
** stderr ** 
	I0925 12:32:19.550189    6961 out.go:345] Setting OutFile to fd 1 ...
	I0925 12:32:19.550372    6961 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:32:19.550376    6961 out.go:358] Setting ErrFile to fd 2...
	I0925 12:32:19.550378    6961 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 12:32:19.550506    6961 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 12:32:19.550733    6961 out.go:352] Setting JSON to false
	I0925 12:32:19.550741    6961 mustload.go:65] Loading cluster: newest-cni-554000
	I0925 12:32:19.550986    6961 config.go:182] Loaded profile config "newest-cni-554000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 12:32:19.555340    6961 out.go:177] * The control-plane node newest-cni-554000 host is not running: state=Stopped
	I0925 12:32:19.559371    6961 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-554000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-554000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-554000 -n newest-cni-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-554000 -n newest-cni-554000: exit status 7 (30.057333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-554000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-554000 -n newest-cni-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-554000 -n newest-cni-554000: exit status 7 (29.906334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-554000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (154/273)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.1/json-events 9.68
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.1
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 198.57
29 TestAddons/serial/Volcano 38.88
31 TestAddons/serial/GCPAuth/Namespaces 0.08
34 TestAddons/parallel/Ingress 18.66
35 TestAddons/parallel/InspektorGadget 10.31
36 TestAddons/parallel/MetricsServer 5.3
38 TestAddons/parallel/CSI 38.33
39 TestAddons/parallel/Headlamp 18.65
40 TestAddons/parallel/CloudSpanner 5.2
41 TestAddons/parallel/LocalPath 41.99
42 TestAddons/parallel/NvidiaDevicePlugin 6.16
43 TestAddons/parallel/Yakd 11.32
44 TestAddons/StoppedEnableDisable 12.4
52 TestHyperKitDriverInstallOrUpdate 11.33
55 TestErrorSpam/setup 35.23
56 TestErrorSpam/start 0.34
57 TestErrorSpam/status 0.24
58 TestErrorSpam/pause 0.65
59 TestErrorSpam/unpause 0.63
60 TestErrorSpam/stop 64.31
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 46.92
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 35.51
67 TestFunctional/serial/KubeContext 0.03
68 TestFunctional/serial/KubectlGetPods 0.04
71 TestFunctional/serial/CacheCmd/cache/add_remote 2.95
72 TestFunctional/serial/CacheCmd/cache/add_local 1.15
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
74 TestFunctional/serial/CacheCmd/cache/list 0.04
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
76 TestFunctional/serial/CacheCmd/cache/cache_reload 0.65
77 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/serial/MinikubeKubectlCmd 2.32
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.02
80 TestFunctional/serial/ExtraConfig 36.91
81 TestFunctional/serial/ComponentHealth 0.04
82 TestFunctional/serial/LogsCmd 0.65
83 TestFunctional/serial/LogsFileCmd 0.62
84 TestFunctional/serial/InvalidService 3.56
86 TestFunctional/parallel/ConfigCmd 0.22
87 TestFunctional/parallel/DashboardCmd 7.45
88 TestFunctional/parallel/DryRun 0.22
89 TestFunctional/parallel/InternationalLanguage 0.11
90 TestFunctional/parallel/StatusCmd 0.24
95 TestFunctional/parallel/AddonsCmd 0.1
96 TestFunctional/parallel/PersistentVolumeClaim 23.98
98 TestFunctional/parallel/SSHCmd 0.12
99 TestFunctional/parallel/CpCmd 0.39
101 TestFunctional/parallel/FileSync 0.06
102 TestFunctional/parallel/CertSync 0.48
106 TestFunctional/parallel/NodeLabels 0.04
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.1
110 TestFunctional/parallel/License 0.33
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.22
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.1
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
118 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
119 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
120 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
122 TestFunctional/parallel/ServiceCmd/DeployApp 6.08
123 TestFunctional/parallel/ServiceCmd/List 0.29
124 TestFunctional/parallel/ServiceCmd/JSONOutput 0.28
125 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
126 TestFunctional/parallel/ServiceCmd/Format 0.09
127 TestFunctional/parallel/ServiceCmd/URL 0.09
128 TestFunctional/parallel/ProfileCmd/profile_not_create 0.14
129 TestFunctional/parallel/ProfileCmd/profile_list 0.13
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.14
131 TestFunctional/parallel/MountCmd/any-port 6.2
132 TestFunctional/parallel/MountCmd/specific-port 0.76
133 TestFunctional/parallel/MountCmd/VerifyCleanup 2.21
134 TestFunctional/parallel/Version/short 0.04
135 TestFunctional/parallel/Version/components 0.2
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
140 TestFunctional/parallel/ImageCommands/ImageBuild 1.89
141 TestFunctional/parallel/ImageCommands/Setup 1.7
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.43
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.37
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.12
145 TestFunctional/parallel/DockerEnv/bash 0.28
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.06
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.14
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.25
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.21
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.17
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 183.6
160 TestMultiControlPlane/serial/DeployApp 4.61
161 TestMultiControlPlane/serial/PingHostFromPods 0.73
162 TestMultiControlPlane/serial/AddWorkerNode 56.76
163 TestMultiControlPlane/serial/NodeLabels 0.13
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.31
165 TestMultiControlPlane/serial/CopyFile 4.38
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 75.1
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.05
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 3.83
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.2
211 TestMainNoArgs 0.03
258 TestStoppedBinaryUpgrade/Setup 1.11
270 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
274 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
275 TestNoKubernetes/serial/ProfileList 31.46
276 TestNoKubernetes/serial/Stop 3.39
278 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
288 TestStoppedBinaryUpgrade/MinikubeLogs 0.71
293 TestStartStop/group/old-k8s-version/serial/Stop 3.61
294 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
306 TestStartStop/group/no-preload/serial/Stop 3.19
309 TestStartStop/group/embed-certs/serial/Stop 3.89
310 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
312 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
328 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.19
329 TestStartStop/group/newest-cni/serial/DeployApp 0
330 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
331 TestStartStop/group/newest-cni/serial/Stop 3.67
332 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.1
334 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0925 11:29:09.456490    1934 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0925 11:29:09.456836    1934 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-539000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-539000: exit status 85 (96.893291ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-539000 | jenkins | v1.34.0 | 25 Sep 24 11:28 PDT |          |
	|         | -p download-only-539000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/25 11:28:44
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0925 11:28:44.079751    1935 out.go:345] Setting OutFile to fd 1 ...
	I0925 11:28:44.080179    1935 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 11:28:44.080184    1935 out.go:358] Setting ErrFile to fd 2...
	I0925 11:28:44.080187    1935 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 11:28:44.080339    1935 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	W0925 11:28:44.080439    1935 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19681-1412/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19681-1412/.minikube/config/config.json: no such file or directory
	I0925 11:28:44.081580    1935 out.go:352] Setting JSON to true
	I0925 11:28:44.098840    1935 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1695,"bootTime":1727287229,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 11:28:44.098915    1935 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 11:28:44.105105    1935 out.go:97] [download-only-539000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 11:28:44.105289    1935 notify.go:220] Checking for updates...
	W0925 11:28:44.105351    1935 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball: no such file or directory
	I0925 11:28:44.109045    1935 out.go:169] MINIKUBE_LOCATION=19681
	I0925 11:28:44.112115    1935 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 11:28:44.117102    1935 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 11:28:44.120085    1935 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 11:28:44.123065    1935 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	W0925 11:28:44.129019    1935 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0925 11:28:44.129231    1935 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 11:28:44.133071    1935 out.go:97] Using the qemu2 driver based on user configuration
	I0925 11:28:44.133091    1935 start.go:297] selected driver: qemu2
	I0925 11:28:44.133105    1935 start.go:901] validating driver "qemu2" against <nil>
	I0925 11:28:44.133186    1935 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0925 11:28:44.137071    1935 out.go:169] Automatically selected the socket_vmnet network
	I0925 11:28:44.142802    1935 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0925 11:28:44.142890    1935 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0925 11:28:44.142953    1935 cni.go:84] Creating CNI manager for ""
	I0925 11:28:44.142986    1935 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0925 11:28:44.143033    1935 start.go:340] cluster config:
	{Name:download-only-539000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 11:28:44.148358    1935 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:28:44.153129    1935 out.go:97] Downloading VM boot image ...
	I0925 11:28:44.153159    1935 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso
	I0925 11:28:57.903553    1935 out.go:97] Starting "download-only-539000" primary control-plane node in "download-only-539000" cluster
	I0925 11:28:57.903579    1935 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0925 11:28:57.958896    1935 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0925 11:28:57.958906    1935 cache.go:56] Caching tarball of preloaded images
	I0925 11:28:57.959145    1935 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0925 11:28:57.965275    1935 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0925 11:28:57.965282    1935 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0925 11:28:58.069296    1935 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0925 11:29:08.133186    1935 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0925 11:29:08.133362    1935 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0925 11:29:08.828236    1935 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0925 11:29:08.828433    1935 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/download-only-539000/config.json ...
	I0925 11:29:08.828450    1935 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/download-only-539000/config.json: {Name:mk750938212cabbaa9b599ff882d97e51fcdd3d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 11:29:08.828689    1935 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0925 11:29:08.828884    1935 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0925 11:29:09.413158    1935 out.go:193] 
	W0925 11:29:09.418036    1935 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19681-1412/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108aa56c0 0x108aa56c0 0x108aa56c0 0x108aa56c0 0x108aa56c0 0x108aa56c0 0x108aa56c0] Decompressors:map[bz2:0x14000121dd0 gz:0x14000121dd8 tar:0x14000121d00 tar.bz2:0x14000121d20 tar.gz:0x14000121d30 tar.xz:0x14000121da0 tar.zst:0x14000121db0 tbz2:0x14000121d20 tgz:0x14000121d30 txz:0x14000121da0 tzst:0x14000121db0 xz:0x14000121e00 zip:0x14000121e10 zst:0x14000121e08] Getters:map[file:0x14000812bc0 http:0x1400017ceb0 https:0x1400017cf00] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0925 11:29:09.418067    1935 out_reason.go:110] 
	W0925 11:29:09.425135    1935 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 11:29:09.428962    1935 out.go:193] 
	
	
	* The control-plane node download-only-539000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-539000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
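
The exit status 85 above is expected for "minikube logs" against a download-only profile; the more telling detail is the 404 in the captured stdout: the checksum fetch for a darwin/arm64 kubectl v1.20.0 fails because upstream does not appear to publish that binary. A hypothetical spot-check, not part of the test run:

# Follow redirects and print only the final HTTP status for the checksum URL
# minikube requested; a 404 matches the "bad response code: 404" in the log.
curl -sIL -o /dev/null -w '%{http_code}\n' \
  https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256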

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-539000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.1/json-events (9.68s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-953000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-953000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 : (9.683143042s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (9.68s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0925 11:29:19.497326    1934 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0925 11:29:19.497380    1934 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-953000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-953000: exit status 85 (81.269083ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-539000 | jenkins | v1.34.0 | 25 Sep 24 11:28 PDT |                     |
	|         | -p download-only-539000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 25 Sep 24 11:29 PDT | 25 Sep 24 11:29 PDT |
	| delete  | -p download-only-539000        | download-only-539000 | jenkins | v1.34.0 | 25 Sep 24 11:29 PDT | 25 Sep 24 11:29 PDT |
	| start   | -o=json --download-only        | download-only-953000 | jenkins | v1.34.0 | 25 Sep 24 11:29 PDT |                     |
	|         | -p download-only-953000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/25 11:29:09
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0925 11:29:09.841358    1965 out.go:345] Setting OutFile to fd 1 ...
	I0925 11:29:09.841485    1965 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 11:29:09.841488    1965 out.go:358] Setting ErrFile to fd 2...
	I0925 11:29:09.841490    1965 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 11:29:09.841624    1965 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 11:29:09.842685    1965 out.go:352] Setting JSON to true
	I0925 11:29:09.858573    1965 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1720,"bootTime":1727287229,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 11:29:09.858662    1965 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 11:29:09.862119    1965 out.go:97] [download-only-953000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 11:29:09.862193    1965 notify.go:220] Checking for updates...
	I0925 11:29:09.866121    1965 out.go:169] MINIKUBE_LOCATION=19681
	I0925 11:29:09.869186    1965 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 11:29:09.873196    1965 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 11:29:09.876186    1965 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 11:29:09.879116    1965 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	W0925 11:29:09.885171    1965 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0925 11:29:09.885339    1965 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 11:29:09.886777    1965 out.go:97] Using the qemu2 driver based on user configuration
	I0925 11:29:09.886784    1965 start.go:297] selected driver: qemu2
	I0925 11:29:09.886787    1965 start.go:901] validating driver "qemu2" against <nil>
	I0925 11:29:09.886820    1965 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0925 11:29:09.890096    1965 out.go:169] Automatically selected the socket_vmnet network
	I0925 11:29:09.895259    1965 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0925 11:29:09.895360    1965 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0925 11:29:09.895383    1965 cni.go:84] Creating CNI manager for ""
	I0925 11:29:09.895411    1965 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 11:29:09.895416    1965 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 11:29:09.895459    1965 start.go:340] cluster config:
	{Name:download-only-953000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-953000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 11:29:09.898850    1965 iso.go:125] acquiring lock: {Name:mk243768dce01b6ef24c39f50004b46afa818b6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:29:09.902192    1965 out.go:97] Starting "download-only-953000" primary control-plane node in "download-only-953000" cluster
	I0925 11:29:09.902199    1965 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 11:29:09.965878    1965 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0925 11:29:09.965894    1965 cache.go:56] Caching tarball of preloaded images
	I0925 11:29:09.966056    1965 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 11:29:09.970259    1965 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0925 11:29:09.970268    1965 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0925 11:29:10.052945    1965 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /Users/jenkins/minikube-integration/19681-1412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-953000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-953000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-953000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.10s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-587000
addons_test.go:975: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-587000: exit status 85 (58.060833ms)

-- stdout --
	* Profile "addons-587000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-587000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-587000
addons_test.go:986: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-587000: exit status 85 (54.105959ms)

-- stdout --
	* Profile "addons-587000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-587000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (198.57s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-587000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-darwin-arm64 start -p addons-587000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m18.565202208s)
--- PASS: TestAddons/Setup (198.57s)

TestAddons/serial/Volcano (38.88s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:835: volcano-scheduler stabilized in 7.512917ms
addons_test.go:851: volcano-controller stabilized in 7.580792ms
addons_test.go:843: volcano-admission stabilized in 7.900834ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-pclb4" [f3dde944-f707-493b-bfe5-6ec1e123e00f] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.009741208s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-q8rvk" [38b32728-2e8c-4a40-9e10-e265e14306fa] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.007601875s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-9kx4r" [03a51b4a-4c67-4441-bcee-d1b6f1b7c24f] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.005916167s
addons_test.go:870: (dbg) Run:  kubectl --context addons-587000 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-587000 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-587000 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [9cbcc2a4-0c89-41a0-b158-ed484d9ab5db] Pending
helpers_test.go:344: "test-job-nginx-0" [9cbcc2a4-0c89-41a0-b158-ed484d9ab5db] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [9cbcc2a4-0c89-41a0-b158-ed484d9ab5db] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.005896125s
addons_test.go:906: (dbg) Run:  out/minikube-darwin-arm64 -p addons-587000 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-darwin-arm64 -p addons-587000 addons disable volcano --alsologtostderr -v=1: (10.63033175s)
--- PASS: TestAddons/serial/Volcano (38.88s)
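
testdata/vcjob.yaml itself is not reproduced in this report. A sketch of a Volcano job consistent with the pod name "test-job-nginx-0" and namespace "my-volcano" seen above; field values are inferred, not the repo's actual file:

# Hypothetical stand-in for "kubectl create -f testdata/vcjob.yaml". Volcano
# names pods <job>-<task>-<index>, so a task named "nginx" yields
# test-job-nginx-0 as observed in the log.
kubectl --context addons-587000 create -f - <<'EOF'
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: test-job
  namespace: my-volcano
spec:
  schedulerName: volcano
  minAvailable: 1
  tasks:
    - replicas: 1
      name: nginx
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: nginx
              image: nginx
EOF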

TestAddons/serial/GCPAuth/Namespaces (0.08s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-587000 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-587000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.08s)

TestAddons/parallel/Ingress (18.66s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-587000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-587000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-587000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6a1b1ae9-d171-41a3-9755-7ad162954652] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6a1b1ae9-d171-41a3-9755-7ad162954652] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.005903333s
I0925 11:43:01.460210    1934 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p addons-587000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-587000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-darwin-arm64 -p addons-587000 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p addons-587000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:309: (dbg) Run:  out/minikube-darwin-arm64 -p addons-587000 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-darwin-arm64 -p addons-587000 addons disable ingress --alsologtostderr -v=1: (7.284874958s)
--- PASS: TestAddons/parallel/Ingress (18.66s)
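
Condensed, the ingress verification above is two manual checks (the profile name and the 192.168.105.2 address are specific to this run; both commands are copied from the log):

# Hit nginx through the ingress from inside the VM, matching the Host rule
# from testdata/nginx-ingress-v1.yaml:
out/minikube-darwin-arm64 -p addons-587000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
# Resolve a test name through the ingress-dns addon at the cluster IP
# reported by "minikube ip":
nslookup hello-john.test 192.168.105.2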

TestAddons/parallel/InspektorGadget (10.31s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-5tkl8" [605ec4b1-4dd8-431d-a80f-f225a6b32ca9] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.007594834s
addons_test.go:789: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-587000
addons_test.go:789: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-587000: (5.300304333s)
--- PASS: TestAddons/parallel/InspektorGadget (10.31s)

TestAddons/parallel/MetricsServer (5.3s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 1.318125ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-wbcvk" [bcee8b88-faae-4d9c-97d7-faf4a0f60c4d] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005008s
addons_test.go:413: (dbg) Run:  kubectl --context addons-587000 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p addons-587000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.30s)

TestAddons/parallel/CSI (38.33s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0925 11:42:37.128090    1934 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0925 11:42:37.130595    1934 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0925 11:42:37.130603    1934 kapi.go:107] duration metric: took 2.543916ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 2.547708ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-587000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-587000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-587000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-587000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-587000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-587000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-587000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-587000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-587000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-587000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-587000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-587000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d880e808-bf83-4175-a2f7-f4e0dc5d5cc4] Pending
helpers_test.go:344: "task-pv-pod" [d880e808-bf83-4175-a2f7-f4e0dc5d5cc4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d880e808-bf83-4175-a2f7-f4e0dc5d5cc4] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.006687s
addons_test.go:528: (dbg) Run:  kubectl --context addons-587000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-587000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-587000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-587000 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-587000 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-587000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-587000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-587000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-587000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-587000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-587000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-587000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [155979f1-0244-4c57-9c42-13a5313ad771] Pending
helpers_test.go:344: "task-pv-pod-restore" [155979f1-0244-4c57-9c42-13a5313ad771] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [155979f1-0244-4c57-9c42-13a5313ad771] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003746542s
addons_test.go:570: (dbg) Run:  kubectl --context addons-587000 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-587000 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-587000 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-darwin-arm64 -p addons-587000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-darwin-arm64 -p addons-587000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.1048095s)
addons_test.go:586: (dbg) Run:  out/minikube-darwin-arm64 -p addons-587000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (38.33s)
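
The runs of identical helper lines above are a poll: helpers_test.go re-runs the same kubectl query until the PVC leaves Pending. A rough shell equivalent, with names taken from the log and the interval assumed:

# Poll the PVC created from testdata/csi-hostpath-driver/pvc.yaml until the
# csi-hostpath driver binds it.
while [ "$(kubectl --context addons-587000 get pvc hpvc -o jsonpath='{.status.phase}')" != "Bound" ]; do
  sleep 2
done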

TestAddons/parallel/Headlamp (18.65s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-587000 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-km5zv" [2301f963-b8f6-40fb-87db-518c80d5642b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-km5zv" [2301f963-b8f6-40fb-87db-518c80d5642b] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.010138291s
addons_test.go:777: (dbg) Run:  out/minikube-darwin-arm64 -p addons-587000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-darwin-arm64 -p addons-587000 addons disable headlamp --alsologtostderr -v=1: (5.300793208s)
--- PASS: TestAddons/parallel/Headlamp (18.65s)

TestAddons/parallel/CloudSpanner (5.2s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-psdp2" [bd5d7c03-b5f8-4fdc-9411-cd4bdbc874ac] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005485292s
addons_test.go:808: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-587000
--- PASS: TestAddons/parallel/CloudSpanner (5.20s)

TestAddons/parallel/LocalPath (41.99s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-587000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-587000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-587000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-587000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-587000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-587000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-587000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-587000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [d924d625-d9f6-4e93-a24c-fc89fd6b6ba2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [d924d625-d9f6-4e93-a24c-fc89fd6b6ba2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [d924d625-d9f6-4e93-a24c-fc89fd6b6ba2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.005279208s
addons_test.go:938: (dbg) Run:  kubectl --context addons-587000 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-darwin-arm64 -p addons-587000 ssh "cat /opt/local-path-provisioner/pvc-d6f86e1e-adfd-42c5-97b3-7dd574cb793e_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-587000 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-587000 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-darwin-arm64 -p addons-587000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-darwin-arm64 -p addons-587000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.485855333s)
--- PASS: TestAddons/parallel/LocalPath (41.99s)
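
The PVC and pod come from testdata/storage-provisioner-rancher/, whose contents are not shown here. A minimal sketch of the claim half, assuming the provisioner's default "local-path" storage class and an illustrative size; the long Pending stretch in the poll above is expected, since local-path binds only once a consuming pod is scheduled:

# Hypothetical stand-in for testdata/storage-provisioner-rancher/pvc.yaml.
kubectl --context addons-587000 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path
  resources:
    requests:
      storage: 64Mi
EOF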

TestAddons/parallel/NvidiaDevicePlugin (6.16s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-t8p54" [c38de65c-7c82-40c9-822e-d674e05e12fd] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004850875s
addons_test.go:1002: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-587000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.16s)

TestAddons/parallel/Yakd (11.32s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-dltl6" [03036bd6-2c65-469c-b797-2f95ca204436] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.009933375s
addons_test.go:1014: (dbg) Run:  out/minikube-darwin-arm64 -p addons-587000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-darwin-arm64 -p addons-587000 addons disable yakd --alsologtostderr -v=1: (5.310202166s)
--- PASS: TestAddons/parallel/Yakd (11.32s)

TestAddons/StoppedEnableDisable (12.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-587000
addons_test.go:170: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-587000: (12.210897875s)
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-587000
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-587000
addons_test.go:183: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-587000
--- PASS: TestAddons/StoppedEnableDisable (12.40s)

TestHyperKitDriverInstallOrUpdate (11.33s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
I0925 12:17:35.288242    1934 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0925 12:17:35.288432    1934 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
W0925 12:17:37.277591    1934 install.go:62] docker-machine-driver-hyperkit: exit status 1
W0925 12:17:37.277894    1934 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I0925 12:17:37.277941    1934 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2044776198/001/docker-machine-driver-hyperkit
I0925 12:17:37.812138    1934 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2044776198/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x10675ad40 0x10675ad40 0x10675ad40 0x10675ad40 0x10675ad40 0x10675ad40 0x10675ad40] Decompressors:map[bz2:0x140005d95d0 gz:0x140005d95d8 tar:0x140005d9580 tar.bz2:0x140005d9590 tar.gz:0x140005d95a0 tar.xz:0x140005d95b0 tar.zst:0x140005d95c0 tbz2:0x140005d9590 tgz:0x140005d95a0 txz:0x140005d95b0 tzst:0x140005d95c0 xz:0x140005d95e0 zip:0x140005d95f0 zst:0x140005d95e8] Getters:map[file:0x14000bcead0 http:0x14001c512c0 https:0x14001c51310] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0925 12:17:37.812274    1934 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2044776198/001/docker-machine-driver-hyperkit
E0925 12:17:38.801299    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestHyperKitDriverInstallOrUpdate (11.33s)
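Note: the warning sequence above shows the installer's download fallback: the arm64-specific checksum URL 404s, so it retries the unsuffixed "common" asset. A rough by-hand check of that release layout (a sketch, not part of the test run; assumes curl is available):

    curl -fsIL https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256   # expect 404, as logged above
    curl -fsIL https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256         # the common-version fallback target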

TestErrorSpam/setup (35.23s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-905000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-905000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-905000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-905000 --driver=qemu2 : (35.232838166s)
--- PASS: TestErrorSpam/setup (35.23s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-905000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-905000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-905000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-905000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-905000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-905000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.24s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-905000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-905000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-905000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-905000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-905000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-905000 status
--- PASS: TestErrorSpam/status (0.24s)

TestErrorSpam/pause (0.65s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-905000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-905000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-905000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-905000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-905000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-905000 pause
--- PASS: TestErrorSpam/pause (0.65s)

TestErrorSpam/unpause (0.63s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-905000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-905000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-905000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-905000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-905000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-905000 unpause
--- PASS: TestErrorSpam/unpause (0.63s)

TestErrorSpam/stop (64.31s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-905000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-905000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-905000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-905000 stop: (12.206105708s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-905000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-905000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-905000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-905000 stop: (26.051228083s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-905000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-905000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-905000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-905000 stop: (26.05446525s)
--- PASS: TestErrorSpam/stop (64.31s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19681-1412/.minikube/files/etc/test/nested/copy/1934/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (46.92s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-251000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-251000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (46.915046541s)
--- PASS: TestFunctional/serial/StartWithProxy (46.92s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (35.51s)

=== RUN   TestFunctional/serial/SoftStart
I0925 11:45:56.579065    1934 config.go:182] Loaded profile config "functional-251000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-251000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-251000 --alsologtostderr -v=8: (35.506547292s)
functional_test.go:663: soft start took 35.50697025s for "functional-251000" cluster.
I0925 11:46:32.085100    1934 config.go:182] Loaded profile config "functional-251000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (35.51s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-251000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.95s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-251000 cache add registry.k8s.io/pause:3.1: (1.2121525s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.95s)

TestFunctional/serial/CacheCmd/cache/add_local (1.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-251000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local368694873/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 cache add minikube-local-cache-test:functional-251000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 cache delete minikube-local-cache-test:functional-251000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-251000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.15s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.65s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-251000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (65.084958ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.65s)
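Note: the exit status 1 above is the expected midpoint of this test: the image is removed inside the node, `crictl inspecti` confirms it is gone, and `cache reload` restores it from the host-side cache. The same round trip by hand, using the binary and profile from this run:

    out/minikube-darwin-arm64 -p functional-251000 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-darwin-arm64 -p functional-251000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
    out/minikube-darwin-arm64 -p functional-251000 cache reload
    out/minikube-darwin-arm64 -p functional-251000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 0: image restored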

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (2.32s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 kubectl -- --context functional-251000 get pods
functional_test.go:716: (dbg) Done: out/minikube-darwin-arm64 -p functional-251000 kubectl -- --context functional-251000 get pods: (2.314951417s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (2.32s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.02s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-251000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-251000 get pods: (1.017662s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.02s)

TestFunctional/serial/ExtraConfig (36.91s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-251000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-251000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.9094295s)
functional_test.go:761: restart took 36.909529542s for "functional-251000" cluster.
I0925 11:47:17.363586    1934 config.go:182] Loaded profile config "functional-251000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (36.91s)
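Note: --extra-config follows minikube's component.key=value convention, so the flag above appends enable-admission-plugins=NamespaceAutoProvision to the kube-apiserver invocation. A quick hand-check that it took effect (a sketch; assumes the kubeadm-standard component=kube-apiserver pod label):

    kubectl --context functional-251000 get pods -n kube-system -l component=kube-apiserver -o yaml | grep enable-admission-plugins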

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-251000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.65s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.65s)

TestFunctional/serial/LogsFileCmd (0.62s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd106124793/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.62s)

TestFunctional/serial/InvalidService (3.56s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-251000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-251000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-251000: exit status 115 (143.27075ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:31502 |
	|-----------|-------------|-------------|----------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-251000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.56s)

TestFunctional/parallel/ConfigCmd (0.22s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-251000 config get cpus: exit status 14 (35.212417ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-251000 config get cpus: exit status 14 (32.382541ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)
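Note: exit status 14 is what `config get` returns for an unset key, which is the behavior this test asserts on both sides of a set/unset cycle. Condensed:

    out/minikube-darwin-arm64 -p functional-251000 config get cpus     # exit 14: key not set
    out/minikube-darwin-arm64 -p functional-251000 config set cpus 2
    out/minikube-darwin-arm64 -p functional-251000 config get cpus     # prints 2
    out/minikube-darwin-arm64 -p functional-251000 config unset cpus
    out/minikube-darwin-arm64 -p functional-251000 config get cpus     # exit 14 again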

TestFunctional/parallel/DashboardCmd (7.45s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-251000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-251000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3203: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.45s)

TestFunctional/parallel/DryRun (0.22s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-251000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-251000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (113.918833ms)

-- stdout --
	* [functional-251000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile

-- /stdout --
** stderr ** 
	I0925 11:48:03.514131    3188 out.go:345] Setting OutFile to fd 1 ...
	I0925 11:48:03.514265    3188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 11:48:03.514269    3188 out.go:358] Setting ErrFile to fd 2...
	I0925 11:48:03.514271    3188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 11:48:03.514392    3188 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 11:48:03.515387    3188 out.go:352] Setting JSON to false
	I0925 11:48:03.531654    3188 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2854,"bootTime":1727287229,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 11:48:03.531731    3188 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 11:48:03.536270    3188 out.go:177] * [functional-251000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0925 11:48:03.543223    3188 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 11:48:03.543287    3188 notify.go:220] Checking for updates...
	I0925 11:48:03.549250    3188 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 11:48:03.552266    3188 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 11:48:03.555240    3188 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 11:48:03.558221    3188 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 11:48:03.561202    3188 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 11:48:03.564601    3188 config.go:182] Loaded profile config "functional-251000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 11:48:03.564875    3188 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 11:48:03.569138    3188 out.go:177] * Using the qemu2 driver based on existing profile
	I0925 11:48:03.576198    3188 start.go:297] selected driver: qemu2
	I0925 11:48:03.576204    3188 start.go:901] validating driver "qemu2" against &{Name:functional-251000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-251000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 11:48:03.576250    3188 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 11:48:03.582170    3188 out.go:201] 
	W0925 11:48:03.586277    3188 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0925 11:48:03.589153    3188 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-251000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.22s)
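Note: exit status 23 is the RSRC_INSUFFICIENT_REQ_MEMORY path in the stderr above: --memory 250MB is below minikube's 1800MB usable minimum, and the check runs even under --dry-run. By hand:

    out/minikube-darwin-arm64 start -p functional-251000 --dry-run --memory 250MB --driver=qemu2   # exit 23: below the 1800MB floor
    out/minikube-darwin-arm64 start -p functional-251000 --dry-run --driver=qemu2                  # validates against the profile's existing 4000MB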

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-251000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-251000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (113.6435ms)

-- stdout --
	* [functional-251000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant

-- /stdout --
** stderr ** 
	I0925 11:48:03.394792    3184 out.go:345] Setting OutFile to fd 1 ...
	I0925 11:48:03.394903    3184 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 11:48:03.394907    3184 out.go:358] Setting ErrFile to fd 2...
	I0925 11:48:03.394909    3184 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 11:48:03.395036    3184 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
	I0925 11:48:03.396480    3184 out.go:352] Setting JSON to false
	I0925 11:48:03.413710    3184 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2854,"bootTime":1727287229,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 11:48:03.413806    3184 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0925 11:48:03.419277    3184 out.go:177] * [functional-251000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	I0925 11:48:03.426214    3184 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 11:48:03.426218    3184 notify.go:220] Checking for updates...
	I0925 11:48:03.434209    3184 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	I0925 11:48:03.437224    3184 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 11:48:03.440211    3184 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 11:48:03.443250    3184 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	I0925 11:48:03.446153    3184 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 11:48:03.449548    3184 config.go:182] Loaded profile config "functional-251000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 11:48:03.449820    3184 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 11:48:03.454195    3184 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0925 11:48:03.461186    3184 start.go:297] selected driver: qemu2
	I0925 11:48:03.461191    3184 start.go:901] validating driver "qemu2" against &{Name:functional-251000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-251000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 11:48:03.461235    3184 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 11:48:03.467221    3184 out.go:201] 
	W0925 11:48:03.471217    3184 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0925 11:48:03.475193    3184 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
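Note: this is the same RSRC_INSUFFICIENT_REQ_MEMORY failure as DryRun; the test only verifies the message is localized. The French stderr translates to: "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB." Reproducing it by hand should only need the standard locale variables, which minikube's translation layer reads (an assumption; also requires a French locale on the host):

    LC_ALL=fr_FR.UTF-8 out/minikube-darwin-arm64 start -p functional-251000 --dry-run --memory 250MB --driver=qemu2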

TestFunctional/parallel/StatusCmd (0.24s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.24s)
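Note: in the -f template above, "kublet:" is literal label text (apparently a typo in the test's format string), while the field actually read is {{.Kubelet}}; Go templates only resolve the {{...}} parts, so the command works regardless. A cleaned-up equivalent:

    out/minikube-darwin-arm64 -p functional-251000 status -f "host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"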

TestFunctional/parallel/AddonsCmd (0.1s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

TestFunctional/parallel/PersistentVolumeClaim (23.98s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8c0db2b1-f8f7-4b9a-9751-0a146a31b429] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.006983042s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-251000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-251000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-251000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-251000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2493c5c6-c3f8-41a2-8368-da5d170d83bd] Pending
helpers_test.go:344: "sp-pod" [2493c5c6-c3f8-41a2-8368-da5d170d83bd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2493c5c6-c3f8-41a2-8368-da5d170d83bd] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.009076166s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-251000 exec sp-pod -- touch /tmp/mount/foo
E0925 11:47:38.872681    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/client.crt: no such file or directory" logger="UnhandledError"
E0925 11:47:38.916116    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-251000 delete -f testdata/storage-provisioner/pod.yaml
E0925 11:47:38.998355    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/client.crt: no such file or directory" logger="UnhandledError"
E0925 11:47:39.161750    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-251000 apply -f testdata/storage-provisioner/pod.yaml
E0925 11:47:39.485161    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [05b1cf63-4cfb-4a67-9561-d15524410ae6] Pending
E0925 11:47:40.128481    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [05b1cf63-4cfb-4a67-9561-d15524410ae6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0925 11:47:41.412203    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [05b1cf63-4cfb-4a67-9561-d15524410ae6] Running
E0925 11:47:43.975946    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00989225s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-251000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.98s)
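Note: the delete/re-apply cycle above is the actual persistence check: a file is written into the PVC-backed mount, the pod is destroyed, and a fresh pod over the same claim must still see the file. Condensed:

    kubectl --context functional-251000 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-251000 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-251000 apply -f testdata/storage-provisioner/pod.yaml    # new pod, same PVC
    kubectl --context functional-251000 exec sp-pod -- ls /tmp/mount                      # foo survives the pod restart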

TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.12s)

TestFunctional/parallel/CpCmd (0.39s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 ssh -n functional-251000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 cp functional-251000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd498567613/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 ssh -n functional-251000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 ssh -n functional-251000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.39s)

TestFunctional/parallel/FileSync (0.06s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1934/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 ssh "sudo cat /etc/test/nested/copy/1934/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.06s)

TestFunctional/parallel/CertSync (0.48s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1934.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 ssh "sudo cat /etc/ssl/certs/1934.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1934.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 ssh "sudo cat /usr/share/ca-certificates/1934.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/19342.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 ssh "sudo cat /etc/ssl/certs/19342.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/19342.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 ssh "sudo cat /usr/share/ca-certificates/19342.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.48s)
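Note: each cert is checked under two names: the PEM itself and an OpenSSL subject-hash alias (51391683.0, 3ec20f2e.0), the naming scheme OpenSSL uses to look up CAs in /etc/ssl/certs. If the pairing above is that alias relationship, the hash is recomputable on the host (a sketch; the source path is an assumption, and openssl must be installed):

    openssl x509 -noout -subject_hash -in /Users/jenkins/minikube-integration/19681-1412/.minikube/files/etc/ssl/certs/1934.pem   # should print 51391683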

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-251000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)
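Note: the go-template above prints only the label keys of the first node. An equivalent, easier-to-read spot check:

    kubectl --context functional-251000 get nodes --show-labels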

TestFunctional/parallel/NonActiveRuntimeDisabled (0.1s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-251000 ssh "sudo systemctl is-active crio": exit status 1 (99.951625ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.10s)
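Note: the remote `systemctl is-active` exits non-zero for an inactive unit (status 3 here), which minikube ssh surfaces as its own non-zero exit, so "inactive" plus a failing exit is exactly the expected result on this docker-runtime cluster: crio is present but disabled. By hand:

    out/minikube-darwin-arm64 -p functional-251000 ssh "sudo systemctl is-active crio"; echo "exit=$?"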

TestFunctional/parallel/License (0.33s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.33s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-251000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-251000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-251000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3033: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-251000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-251000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-251000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [d9e1dd39-5ea6-4b4f-b1ff-691b3fe40cfa] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [d9e1dd39-5ea6-4b4f-b1ff-691b3fe40cfa] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.0034215s
I0925 11:47:32.924808    1934 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.10s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-251000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.140.238 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I0925 11:47:32.985854    1934 config.go:182] Loaded profile config "functional-251000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)
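The dig invocation above talks to the cluster DNS service straight from the macOS host; 10.96.0.10 is the conventional kube-dns ClusterIP and is only reachable while the tunnel is routing the service CIDR. A quick spot-check under those assumptions:

# Expected to print the ClusterIP assigned to nginx-svc while the tunnel is up.
dig +short @10.96.0.10 nginx-svc.default.svc.cluster.local. A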

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I0925 11:47:33.024730    1934 config.go:182] Loaded profile config "functional-251000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-251000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.08s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-251000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-251000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-cgdns" [e334014c-6953-4eac-a21a-14f40883b521] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-cgdns" [e334014c-6953-4eac-a21a-14f40883b521] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E0925 11:47:49.099644    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.005175166s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.08s)

TestFunctional/parallel/ServiceCmd/List (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.29s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 service list -o json
functional_test.go:1494: Took "282.869417ms" to run "out/minikube-darwin-arm64 -p functional-251000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:30887
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)

TestFunctional/parallel/ServiceCmd/Format (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.09s)

TestFunctional/parallel/ServiceCmd/URL (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:30887
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.09s)
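Because --url prints the NodePort endpoint instead of opening a browser, the output is easy to script against. A sketch (the curl probe is illustrative, not part of the test):

# Capture the endpoint and probe it; the echoserver pod should answer.
URL=$(out/minikube-darwin-arm64 -p functional-251000 service hello-node --url)
curl -s "$URL"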

TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

TestFunctional/parallel/ProfileCmd/profile_list (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "93.952208ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "34.608542ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.13s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "102.440625ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "39.601667ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.14s)

TestFunctional/parallel/MountCmd/any-port (6.2s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-251000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1468375659/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727290073957401000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1468375659/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727290073957401000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1468375659/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727290073957401000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1468375659/001/test-1727290073957401000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-251000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (58.555166ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0925 11:47:54.016753    1934 retry.go:31] will retry after 507.887875ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 25 18:47 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 25 18:47 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 25 18:47 test-1727290073957401000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 ssh cat /mount-9p/test-1727290073957401000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-251000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [3529e6c1-bec0-4f8d-b780-03b09af1ae5e] Pending
helpers_test.go:344: "busybox-mount" [3529e6c1-bec0-4f8d-b780-03b09af1ae5e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [3529e6c1-bec0-4f8d-b780-03b09af1ae5e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
E0925 11:47:59.343485    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [3529e6c1-bec0-4f8d-b780-03b09af1ae5e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.005079042s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-251000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-251000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1468375659/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.20s)
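The flow exercised above: export a host directory into the guest over 9p, verify with findmnt, let a pod consume it, then unmount. Condensed, with HOST_DIR standing in for the per-test temp directory:

# Export a host directory into the VM at /mount-9p (backgrounded).
out/minikube-darwin-arm64 mount -p functional-251000 "$HOST_DIR:/mount-9p" &
# Confirm the 9p filesystem is mounted, then tear it down.
out/minikube-darwin-arm64 -p functional-251000 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-darwin-arm64 -p functional-251000 ssh "sudo umount -f /mount-9p"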

TestFunctional/parallel/MountCmd/specific-port (0.76s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-251000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2735828830/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-251000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (58.858167ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0925 11:48:00.219501    1934 retry.go:31] will retry after 264.169961ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-251000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2735828830/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-251000 ssh "sudo umount -f /mount-9p": exit status 1 (60.551458ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-251000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-251000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2735828830/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.76s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.21s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-251000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4278592423/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-251000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4278592423/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-251000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4278592423/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-251000 ssh "findmnt -T" /mount2: exit status 1 (61.677208ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0925 11:48:01.775516    1934 retry.go:31] will retry after 402.773909ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-251000 ssh "findmnt -T" /mount2: exit status 1 (59.332125ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0925 11:48:02.314895    1934 retry.go:31] will retry after 560.789696ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-251000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-251000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4278592423/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-251000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4278592423/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-251000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4278592423/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.21s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.2s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.20s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-251000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-251000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-251000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-251000 image ls --format short --alsologtostderr:
I0925 11:48:12.888061    3343 out.go:345] Setting OutFile to fd 1 ...
I0925 11:48:12.888246    3343 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0925 11:48:12.888251    3343 out.go:358] Setting ErrFile to fd 2...
I0925 11:48:12.888254    3343 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0925 11:48:12.888409    3343 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
I0925 11:48:12.888893    3343 config.go:182] Loaded profile config "functional-251000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0925 11:48:12.888956    3343 config.go:182] Loaded profile config "functional-251000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0925 11:48:12.890297    3343 ssh_runner.go:195] Run: systemctl --version
I0925 11:48:12.890306    3343 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/functional-251000/id_rsa Username:docker}
I0925 11:48:12.914731    3343 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-251000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-251000 | 9cd847e3a5608 | 30B    |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| docker.io/kicbase/echo-server               | functional-251000 | ce2d2cda2d858 | 4.78MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-251000 image ls --format table --alsologtostderr:
I0925 11:48:13.040812    3352 out.go:345] Setting OutFile to fd 1 ...
I0925 11:48:13.040968    3352 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0925 11:48:13.040973    3352 out.go:358] Setting ErrFile to fd 2...
I0925 11:48:13.040975    3352 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0925 11:48:13.041107    3352 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
I0925 11:48:13.041557    3352 config.go:182] Loaded profile config "functional-251000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0925 11:48:13.041616    3352 config.go:182] Loaded profile config "functional-251000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0925 11:48:13.042437    3352 ssh_runner.go:195] Run: systemctl --version
I0925 11:48:13.042446    3352 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/functional-251000/id_rsa Username:docker}
I0925 11:48:13.064764    3352 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-251000 image ls --format json --alsologtostderr:
[{"id":"9cd847e3a5608aa9f610275a6960a4accb641356b14bf8adb1720a810b75fb3b","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-251000"],"size":"30"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"ce2d2cda2d858fdaea84129deb8
6d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-251000"],"size":"4780000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","rep
oDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"
size":"240000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-251000 image ls --format json --alsologtostderr:
I0925 11:48:12.969790    3348 out.go:345] Setting OutFile to fd 1 ...
I0925 11:48:12.969943    3348 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0925 11:48:12.969947    3348 out.go:358] Setting ErrFile to fd 2...
I0925 11:48:12.969949    3348 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0925 11:48:12.970107    3348 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
I0925 11:48:12.970559    3348 config.go:182] Loaded profile config "functional-251000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0925 11:48:12.970626    3348 config.go:182] Loaded profile config "functional-251000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0925 11:48:12.971413    3348 ssh_runner.go:195] Run: systemctl --version
I0925 11:48:12.971422    3348 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/functional-251000/id_rsa Username:docker}
I0925 11:48:12.994500    3348 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-251000 image ls --format yaml --alsologtostderr:
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-251000
size: "4780000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 9cd847e3a5608aa9f610275a6960a4accb641356b14bf8adb1720a810b75fb3b
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-251000
size: "30"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-251000 image ls --format yaml --alsologtostderr:
I0925 11:48:12.888091    3344 out.go:345] Setting OutFile to fd 1 ...
I0925 11:48:12.888253    3344 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0925 11:48:12.888256    3344 out.go:358] Setting ErrFile to fd 2...
I0925 11:48:12.888259    3344 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0925 11:48:12.888410    3344 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
I0925 11:48:12.888856    3344 config.go:182] Loaded profile config "functional-251000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0925 11:48:12.888920    3344 config.go:182] Loaded profile config "functional-251000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0925 11:48:12.889807    3344 ssh_runner.go:195] Run: systemctl --version
I0925 11:48:12.889817    3344 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/functional-251000/id_rsa Username:docker}
I0925 11:48:12.914719    3344 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-251000 ssh pgrep buildkitd: exit status 1 (59.498791ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 image build -t localhost/my-image:functional-251000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-251000 image build -t localhost/my-image:functional-251000 testdata/build --alsologtostderr: (1.757338875s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-251000 image build -t localhost/my-image:functional-251000 testdata/build --alsologtostderr:
I0925 11:48:13.021139    3351 out.go:345] Setting OutFile to fd 1 ...
I0925 11:48:13.021367    3351 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0925 11:48:13.021372    3351 out.go:358] Setting ErrFile to fd 2...
I0925 11:48:13.021375    3351 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0925 11:48:13.021515    3351 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19681-1412/.minikube/bin
I0925 11:48:13.021990    3351 config.go:182] Loaded profile config "functional-251000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0925 11:48:13.022766    3351 config.go:182] Loaded profile config "functional-251000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0925 11:48:13.023672    3351 ssh_runner.go:195] Run: systemctl --version
I0925 11:48:13.023681    3351 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19681-1412/.minikube/machines/functional-251000/id_rsa Username:docker}
I0925 11:48:13.047327    3351 build_images.go:161] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.4098480664.tar
I0925 11:48:13.047396    3351 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0925 11:48:13.051391    3351 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4098480664.tar
I0925 11:48:13.053138    3351 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4098480664.tar: stat -c "%s %y" /var/lib/minikube/build/build.4098480664.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4098480664.tar': No such file or directory
I0925 11:48:13.053158    3351 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.4098480664.tar --> /var/lib/minikube/build/build.4098480664.tar (3072 bytes)
I0925 11:48:13.062754    3351 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4098480664
I0925 11:48:13.068394    3351 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4098480664 -xf /var/lib/minikube/build/build.4098480664.tar
I0925 11:48:13.075606    3351 docker.go:360] Building image: /var/lib/minikube/build/build.4098480664
I0925 11:48:13.075668    3351 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-251000 /var/lib/minikube/build/build.4098480664
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers done
#8 writing image sha256:a60bd0afdd4b70e85810af7020f36c9305730f7a8ec63e01a1fff10c4e28a0dd done
#8 naming to localhost/my-image:functional-251000 done
#8 DONE 0.0s
I0925 11:48:14.731633    3351 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-251000 /var/lib/minikube/build/build.4098480664: (1.6559885s)
I0925 11:48:14.731706    3351 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4098480664
I0925 11:48:14.735715    3351 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4098480664.tar
I0925 11:48:14.739060    3351 build_images.go:217] Built localhost/my-image:functional-251000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.4098480664.tar
I0925 11:48:14.739076    3351 build_images.go:133] succeeded building to: functional-251000
I0925 11:48:14.739080    3351 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.89s)
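Judging by the BuildKit steps above, testdata/build contains a three-step Dockerfile (FROM the busybox image, RUN true, ADD content.txt). The equivalent manual invocation, under that reading:

# Build inside the cluster's Docker daemon, then confirm the image landed.
out/minikube-darwin-arm64 -p functional-251000 image build -t localhost/my-image:functional-251000 testdata/build
out/minikube-darwin-arm64 -p functional-251000 image ls | grep my-image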

TestFunctional/parallel/ImageCommands/Setup (1.7s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.68654875s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-251000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.70s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 image load --daemon kicbase/echo-server:functional-251000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.43s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 image load --daemon kicbase/echo-server:functional-251000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 image ls
2024/09/25 11:48:10 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.37s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-251000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 image load --daemon kicbase/echo-server:functional-251000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.12s)

TestFunctional/parallel/DockerEnv/bash (0.28s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-251000 docker-env) && out/minikube-darwin-arm64 status -p functional-251000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-251000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.28s)
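docker-env emits shell exports (DOCKER_HOST and related variables) that point a local docker client at the daemon inside the VM, which is exactly what the test evaluates in a subshell. Interactively, the same pattern is:

# Point this shell's docker client at the daemon inside functional-251000.
eval $(out/minikube-darwin-arm64 -p functional-251000 docker-env)
# Now lists the images that live inside the VM, not on the host.
docker images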

TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 image save kicbase/echo-server:functional-251000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.14s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 image rm kicbase/echo-server:functional-251000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.25s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-251000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-251000 image save --daemon kicbase/echo-server:functional-251000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-251000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.17s)
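Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon cover the full save/remove/load round trip for a cached image. A condensed version of that cycle (the /tmp tarball path is illustrative; the run above used a Jenkins workspace path):

# Save the image out of the cluster, delete it, then load it back in.
out/minikube-darwin-arm64 -p functional-251000 image save kicbase/echo-server:functional-251000 /tmp/echo-server-save.tar
out/minikube-darwin-arm64 -p functional-251000 image rm kicbase/echo-server:functional-251000
out/minikube-darwin-arm64 -p functional-251000 image load /tmp/echo-server-save.tar
out/minikube-darwin-arm64 -p functional-251000 image ls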

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-251000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-251000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-251000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (183.6s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-813000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0925 11:48:19.825697    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/client.crt: no such file or directory" logger="UnhandledError"
E0925 11:49:00.786945    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/client.crt: no such file or directory" logger="UnhandledError"
E0925 11:50:22.708777    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/addons-587000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-813000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m3.400225541s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (183.60s)

TestMultiControlPlane/serial/DeployApp (4.61s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-813000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-813000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-813000 -- rollout status deployment/busybox: (2.9621895s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-813000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-813000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-813000 -- exec busybox-7dff88458-28mfl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-813000 -- exec busybox-7dff88458-9bdv6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-813000 -- exec busybox-7dff88458-pv794 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-813000 -- exec busybox-7dff88458-28mfl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-813000 -- exec busybox-7dff88458-9bdv6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-813000 -- exec busybox-7dff88458-pv794 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-813000 -- exec busybox-7dff88458-28mfl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-813000 -- exec busybox-7dff88458-9bdv6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-813000 -- exec busybox-7dff88458-pv794 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.61s)

TestMultiControlPlane/serial/PingHostFromPods (0.73s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-813000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-813000 -- exec busybox-7dff88458-28mfl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-813000 -- exec busybox-7dff88458-28mfl -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-813000 -- exec busybox-7dff88458-9bdv6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-813000 -- exec busybox-7dff88458-9bdv6 -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-813000 -- exec busybox-7dff88458-pv794 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-813000 -- exec busybox-7dff88458-pv794 -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.73s)
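
The shell pipeline repeated above, nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3, extracts the resolved host address (the third space-separated field of the fifth line of busybox nslookup output), which each pod then pings. A minimal Go sketch of the same parse; the sample output layout is an assumption (busybox's "Address 1: <ip>" format), since the raw nslookup output is not captured in this report:

	package main

	import (
		"fmt"
		"strings"
	)

	// hostIPFromNslookup mirrors `awk 'NR==5' | cut -d' ' -f3`: fifth line,
	// third space-separated field.
	func hostIPFromNslookup(out string) (string, bool) {
		lines := strings.Split(out, "\n")
		if len(lines) < 5 {
			return "", false
		}
		fields := strings.Split(lines[4], " ") // NR==5 -> index 4
		if len(fields) < 3 {
			return "", false
		}
		return fields[2], true // -f3 -> index 2
	}

	func main() {
		// Assumed busybox nslookup layout, not taken from this run.
		sample := "Server:    10.96.0.10\n" +
			"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
			"\n" +
			"Name:      host.minikube.internal\n" +
			"Address 1: 192.168.105.1\n"
		if ip, ok := hostIPFromNslookup(sample); ok {
			fmt.Println(ip) // 192.168.105.1
		}
	}
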

TestMultiControlPlane/serial/AddWorkerNode (56.76s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-813000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-813000 -v=7 --alsologtostderr: (56.53435925s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.76s)

TestMultiControlPlane/serial/NodeLabels (0.13s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-813000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.31s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.31s)

TestMultiControlPlane/serial/CopyFile (4.38s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 cp testdata/cp-test.txt ha-813000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 ssh -n ha-813000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 cp ha-813000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile576404734/001/cp-test_ha-813000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 ssh -n ha-813000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 cp ha-813000:/home/docker/cp-test.txt ha-813000-m02:/home/docker/cp-test_ha-813000_ha-813000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 ssh -n ha-813000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 ssh -n ha-813000-m02 "sudo cat /home/docker/cp-test_ha-813000_ha-813000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 cp ha-813000:/home/docker/cp-test.txt ha-813000-m03:/home/docker/cp-test_ha-813000_ha-813000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 ssh -n ha-813000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 ssh -n ha-813000-m03 "sudo cat /home/docker/cp-test_ha-813000_ha-813000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 cp ha-813000:/home/docker/cp-test.txt ha-813000-m04:/home/docker/cp-test_ha-813000_ha-813000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 ssh -n ha-813000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 ssh -n ha-813000-m04 "sudo cat /home/docker/cp-test_ha-813000_ha-813000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 cp testdata/cp-test.txt ha-813000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 ssh -n ha-813000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 cp ha-813000-m02:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile576404734/001/cp-test_ha-813000-m02.txt
E0925 11:52:22.615090    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/functional-251000/client.crt: no such file or directory" logger="UnhandledError"
E0925 11:52:22.622691    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/functional-251000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 ssh -n ha-813000-m02 "sudo cat /home/docker/cp-test.txt"
E0925 11:52:22.634406    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/functional-251000/client.crt: no such file or directory" logger="UnhandledError"
E0925 11:52:22.656661    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/functional-251000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 cp ha-813000-m02:/home/docker/cp-test.txt ha-813000:/home/docker/cp-test_ha-813000-m02_ha-813000.txt
E0925 11:52:22.699479    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/functional-251000/client.crt: no such file or directory" logger="UnhandledError"
E0925 11:52:22.782336    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/functional-251000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 ssh -n ha-813000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 ssh -n ha-813000 "sudo cat /home/docker/cp-test_ha-813000-m02_ha-813000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 cp ha-813000-m02:/home/docker/cp-test.txt ha-813000-m03:/home/docker/cp-test_ha-813000-m02_ha-813000-m03.txt
E0925 11:52:22.943960    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/functional-251000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 ssh -n ha-813000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 ssh -n ha-813000-m03 "sudo cat /home/docker/cp-test_ha-813000-m02_ha-813000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 cp ha-813000-m02:/home/docker/cp-test.txt ha-813000-m04:/home/docker/cp-test_ha-813000-m02_ha-813000-m04.txt
E0925 11:52:23.267447    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/functional-251000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 ssh -n ha-813000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 ssh -n ha-813000-m04 "sudo cat /home/docker/cp-test_ha-813000-m02_ha-813000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 cp testdata/cp-test.txt ha-813000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 ssh -n ha-813000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 cp ha-813000-m03:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile576404734/001/cp-test_ha-813000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 ssh -n ha-813000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 cp ha-813000-m03:/home/docker/cp-test.txt ha-813000:/home/docker/cp-test_ha-813000-m03_ha-813000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 ssh -n ha-813000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 ssh -n ha-813000 "sudo cat /home/docker/cp-test_ha-813000-m03_ha-813000.txt"
E0925 11:52:23.909328    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/functional-251000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 cp ha-813000-m03:/home/docker/cp-test.txt ha-813000-m02:/home/docker/cp-test_ha-813000-m03_ha-813000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 ssh -n ha-813000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 ssh -n ha-813000-m02 "sudo cat /home/docker/cp-test_ha-813000-m03_ha-813000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 cp ha-813000-m03:/home/docker/cp-test.txt ha-813000-m04:/home/docker/cp-test_ha-813000-m03_ha-813000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 ssh -n ha-813000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 ssh -n ha-813000-m04 "sudo cat /home/docker/cp-test_ha-813000-m03_ha-813000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 cp testdata/cp-test.txt ha-813000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 ssh -n ha-813000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 cp ha-813000-m04:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile576404734/001/cp-test_ha-813000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 ssh -n ha-813000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 cp ha-813000-m04:/home/docker/cp-test.txt ha-813000:/home/docker/cp-test_ha-813000-m04_ha-813000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 ssh -n ha-813000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 ssh -n ha-813000 "sudo cat /home/docker/cp-test_ha-813000-m04_ha-813000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 cp ha-813000-m04:/home/docker/cp-test.txt ha-813000-m02:/home/docker/cp-test_ha-813000-m04_ha-813000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 ssh -n ha-813000-m04 "sudo cat /home/docker/cp-test.txt"
E0925 11:52:25.190786    1934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19681-1412/.minikube/profiles/functional-251000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 ssh -n ha-813000-m02 "sudo cat /home/docker/cp-test_ha-813000-m04_ha-813000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 cp ha-813000-m04:/home/docker/cp-test.txt ha-813000-m03:/home/docker/cp-test_ha-813000-m04_ha-813000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 ssh -n ha-813000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-813000 ssh -n ha-813000-m03 "sudo cat /home/docker/cp-test_ha-813000-m04_ha-813000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.38s)
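
Every cp in the CopyFile test above is verified the same way: copy the file in with minikube cp, read it back on the target node with minikube ssh -n <node> "sudo cat ...", and compare against the source. A minimal Go sketch of one such leg, assuming minikube is on PATH; the profile and node names are placeholders:

	package main

	import (
		"bytes"
		"log"
		"os"
		"os/exec"
	)

	func main() {
		const profile = "ha-813000"  // assumed profile
		const node = "ha-813000-m02" // assumed target node
		const remote = "/home/docker/cp-test.txt"

		want, err := os.ReadFile("testdata/cp-test.txt")
		if err != nil {
			log.Fatal(err)
		}

		// minikube -p <profile> cp testdata/cp-test.txt <node>:<remote>
		if out, err := exec.Command("minikube", "-p", profile, "cp",
			"testdata/cp-test.txt", node+":"+remote).CombinedOutput(); err != nil {
			log.Fatalf("cp failed: %v\n%s", err, out)
		}

		// minikube -p <profile> ssh -n <node> "sudo cat <remote>"
		got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node,
			"sudo cat "+remote).Output()
		if err != nil {
			log.Fatalf("ssh cat failed: %v", err)
		}
		if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
			log.Fatal("round-tripped content differs from testdata/cp-test.txt")
		}
	}
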

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (75.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m15.094292334s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (75.10s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.83s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-457000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-457000 --output=json --user=testUser: (3.830548042s)
--- PASS: TestJSONOutput/stop/Command (3.83s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-709000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-709000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (91.849416ms)
-- stdout --
	{"specversion":"1.0","id":"761e0428-c8f6-4827-9dc0-3c7976ad93ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-709000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3c8428c1-1ccb-4ac3-a4d1-6bb5850bb02b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19681"}}
	{"specversion":"1.0","id":"ad05ca48-6586-43b9-a558-66449fce6caa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig"}}
	{"specversion":"1.0","id":"d2c007d4-6421-4bd8-be73-fc60f9f90ba4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"1fd14247-f88e-440a-a8bc-740e750bd4c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b548ac9e-f87b-4da3-b7c6-2f3260e64fa0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube"}}
	{"specversion":"1.0","id":"9259e09e-25a4-4ef8-966a-9a0d8b4c2a45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7e7fd85f-bec8-4e27-8ab7-41b66eafd98a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-709000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-709000
--- PASS: TestErrorJSONOutput (0.20s)
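
The stdout block above shows what --output=json emits: one CloudEvents-style JSON object per line, with step, info, and error event types and a string-keyed data payload (the error event carries exitcode, message, and name). A minimal Go sketch of consuming that stream; the struct covers only the fields visible in this report, and real events may carry more:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
		"strings"
	)

	// cloudEvent models the fields visible in the stdout block above.
	type cloudEvent struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		// e.g. minikube start --output=json ... | this program
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev cloudEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip non-JSON lines
			}
			if strings.HasSuffix(ev.Type, ".error") { // io.k8s.sigs.minikube.error
				fmt.Printf("exit %s: %s (%s)\n",
					ev.Data["exitcode"], ev.Data["message"], ev.Data["name"])
			}
		}
	}
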

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1.11s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.11s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-078000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-078000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (98.632625ms)
-- stdout --
	* [NoKubernetes-078000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19681
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19681-1412/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19681-1412/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-078000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-078000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (43.578875ms)
-- stdout --
	* The control-plane node NoKubernetes-078000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-078000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
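
The check above passes because the test treats certain non-zero exits as expected: exit status 83 here means the host itself is stopped (state=Stopped), which still establishes that the kubelet is not running. A minimal Go sketch of that exit-code handling, with the profile name taken from this run but minikube assumed on PATH:

	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("minikube", "ssh", "-p", "NoKubernetes-078000",
			"sudo systemctl is-active --quiet service kubelet")
		err := cmd.Run()
		var ee *exec.ExitError
		switch {
		case err == nil:
			log.Fatal("kubelet is active, but it should not be running")
		case errors.As(err, &ee):
			fmt.Printf("kubelet not active (exit %d)\n", ee.ExitCode())
		default:
			log.Fatal(err) // minikube itself failed to run
		}
	}
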

TestNoKubernetes/serial/ProfileList (31.46s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.763408208s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.692551583s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.46s)

TestNoKubernetes/serial/Stop (3.39s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-078000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-078000: (3.390627458s)
--- PASS: TestNoKubernetes/serial/Stop (3.39s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-078000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-078000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.475834ms)
-- stdout --
	* The control-plane node NoKubernetes-078000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-078000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.71s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-814000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.71s)

TestStartStop/group/old-k8s-version/serial/Stop (3.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-473000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-473000 --alsologtostderr -v=3: (3.6080585s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.61s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-473000 -n old-k8s-version-473000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-473000 -n old-k8s-version-473000: exit status 7 (57.827167ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-473000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/no-preload/serial/Stop (3.19s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-690000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-690000 --alsologtostderr -v=3: (3.193476709s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.19s)

TestStartStop/group/embed-certs/serial/Stop (3.89s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-404000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-404000 --alsologtostderr -v=3: (3.892729459s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.89s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-690000 -n no-preload-690000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-690000 -n no-preload-690000: exit status 7 (57.432416ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-690000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-404000 -n embed-certs-404000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-404000 -n embed-certs-404000: exit status 7 (55.833167ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-404000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-022000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-022000 --alsologtostderr -v=3: (3.185967584s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.19s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-554000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.67s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-554000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-554000 --alsologtostderr -v=3: (3.666141458s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.67s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-022000 -n default-k8s-diff-port-022000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-022000 -n default-k8s-diff-port-022000: exit status 7 (34.097166ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-022000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.10s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-554000 -n newest-cni-554000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-554000 -n newest-cni-554000: exit status 7 (54.203375ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-554000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (20/273)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.3s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-811000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-811000
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-811000
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-811000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-811000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-811000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-811000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-811000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-811000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-811000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-811000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: /etc/hosts:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: /etc/resolv.conf:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-811000

>>> host: crictl pods:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: crictl containers:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> k8s: describe netcat deployment:
error: context "cilium-811000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-811000" does not exist

>>> k8s: netcat logs:
error: context "cilium-811000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-811000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-811000" does not exist

>>> k8s: coredns logs:
error: context "cilium-811000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-811000" does not exist

>>> k8s: api server logs:
error: context "cilium-811000" does not exist

>>> host: /etc/cni:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: ip a s:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: ip r s:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: iptables-save:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: iptables table nat:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-811000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-811000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-811000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-811000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-811000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-811000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-811000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-811000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-811000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-811000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-811000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: kubelet daemon config:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> k8s: kubelet logs:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-811000

>>> host: docker daemon status:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: docker daemon config:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: docker system info:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: cri-docker daemon status:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: cri-docker daemon config:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: cri-dockerd version:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: containerd daemon status:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: containerd daemon config:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: containerd config dump:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: crio daemon status:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: crio daemon config:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: /etc/crio:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: crio config:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

----------------------- debugLogs end: cilium-811000 [took: 2.195073583s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-811000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-811000
--- SKIP: TestNetworkPlugins/group/cilium (2.30s)
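
Note the shape of this entry: the cilium test is skipped up front (net_test.go:102), yet the debug-log collector invoked during cleanup (hence the panic.go:629 frame) still runs its full probe battery against the never-created "cilium-811000" profile. The empty kubeconfig printed under ">>> k8s: kubectl config:" — clusters, contexts, and users all null — is why every kubectl probe reports "context was not found" and every host probe reports that the profile is missing. A minimal sketch of such a collector follows; the probe list and command strings are illustrative, not minikube's actual debugLogs code:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "cilium-811000"
	probes := []struct{ label, cmd string }{
		{"k8s: kubectl config", "kubectl config view"},
		{"k8s: describe coredns deployment",
			"kubectl --context " + profile + " -n kube-system describe deployment coredns"},
		{"host: /etc/resolv.conf",
			"minikube -p " + profile + " ssh -- cat /etc/resolv.conf"},
	}
	for _, p := range probes {
		fmt.Printf(">>> %s:\n", p.label)
		// Failures are logged rather than fatal, so the error text itself
		// becomes the log body, exactly as in the output above.
		out, _ := exec.Command("sh", "-c", p.cmd).CombinedOutput()
		fmt.Println(string(out))
	}
}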

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-164000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-164000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
