Test Report: QEMU_macOS 18241

51610bcb4030010c42e994a5dfa0c2b02e4dd273:2024-03-07:33452

Failed tests (98/281)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 39.71
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 9.97
39 TestAddons/parallel/Ingress 35.68
54 TestCertOptions 10.09
55 TestCertExpiration 195.27
56 TestDockerFlags 10.01
57 TestForceSystemdFlag 9.96
58 TestForceSystemdEnv 10.09
103 TestFunctional/parallel/ServiceCmdConnect 30.97
175 TestMutliControlPlane/serial/StopSecondaryNode 214.15
176 TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop 104.68
177 TestMutliControlPlane/serial/RestartSecondaryNode 208.9
179 TestMutliControlPlane/serial/RestartClusterKeepsNodes 234.39
180 TestMutliControlPlane/serial/DeleteSecondaryNode 0.11
181 TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.1
182 TestMutliControlPlane/serial/StopCluster 202.07
183 TestMutliControlPlane/serial/RestartCluster 5.25
184 TestMutliControlPlane/serial/DegradedAfterClusterRestart 0.11
185 TestMutliControlPlane/serial/AddSecondaryNode 0.08
189 TestImageBuild/serial/Setup 10.09
192 TestJSONOutput/start/Command 9.79
198 TestJSONOutput/pause/Command 0.08
204 TestJSONOutput/unpause/Command 0.05
221 TestMinikubeProfile 10.33
224 TestMountStart/serial/StartWithMountFirst 10.62
227 TestMultiNode/serial/FreshStart2Nodes 9.83
228 TestMultiNode/serial/DeployApp2Nodes 106.83
229 TestMultiNode/serial/PingHostFrom2Pods 0.09
230 TestMultiNode/serial/AddNode 0.08
231 TestMultiNode/serial/MultiNodeLabels 0.56
232 TestMultiNode/serial/ProfileList 0.1
233 TestMultiNode/serial/CopyFile 0.06
234 TestMultiNode/serial/StopNode 0.14
235 TestMultiNode/serial/StartAfterStop 51.36
236 TestMultiNode/serial/RestartKeepsNodes 9.21
237 TestMultiNode/serial/DeleteNode 0.1
238 TestMultiNode/serial/StopMultiNode 3.5
239 TestMultiNode/serial/RestartMultiNode 5.25
240 TestMultiNode/serial/ValidateNameConflict 20
244 TestPreload 10.01
246 TestScheduledStopUnix 10.15
247 TestSkaffold 16.55
250 TestRunningBinaryUpgrade 627.17
252 TestKubernetesUpgrade 18.7
265 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.73
266 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.21
268 TestStoppedBinaryUpgrade/Upgrade 580.56
270 TestPause/serial/Start 9.97
280 TestNoKubernetes/serial/StartWithK8s 10.02
281 TestNoKubernetes/serial/StartWithStopK8s 5.92
282 TestNoKubernetes/serial/Start 5.89
286 TestNoKubernetes/serial/StartNoArgs 5.95
288 TestNetworkPlugins/group/auto/Start 9.76
289 TestNetworkPlugins/group/kindnet/Start 9.81
290 TestNetworkPlugins/group/calico/Start 9.87
291 TestNetworkPlugins/group/custom-flannel/Start 9.82
292 TestNetworkPlugins/group/false/Start 9.89
293 TestNetworkPlugins/group/enable-default-cni/Start 9.78
294 TestNetworkPlugins/group/flannel/Start 9.77
296 TestNetworkPlugins/group/bridge/Start 9.77
297 TestNetworkPlugins/group/kubenet/Start 9.91
299 TestStartStop/group/old-k8s-version/serial/FirstStart 9.84
301 TestStartStop/group/no-preload/serial/FirstStart 11.69
302 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
303 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.13
306 TestStartStop/group/old-k8s-version/serial/SecondStart 5.86
307 TestStartStop/group/no-preload/serial/DeployApp 0.1
308 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.04
309 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
310 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.13
311 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.09
312 TestStartStop/group/old-k8s-version/serial/Pause 0.11
315 TestStartStop/group/embed-certs/serial/FirstStart 10.14
317 TestStartStop/group/no-preload/serial/SecondStart 6.86
318 TestStartStop/group/embed-certs/serial/DeployApp 0.1
319 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.04
320 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
321 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.13
322 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.09
323 TestStartStop/group/no-preload/serial/Pause 0.12
326 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.05
328 TestStartStop/group/embed-certs/serial/SecondStart 7.09
329 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.04
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
332 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.13
333 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.09
334 TestStartStop/group/embed-certs/serial/Pause 0.12
337 TestStartStop/group/newest-cni/serial/FirstStart 9.95
339 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.56
342 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
343 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.07
345 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
346 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
348 TestStartStop/group/newest-cni/serial/SecondStart 5.26
351 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
352 TestStartStop/group/newest-cni/serial/Pause 0.11
TestDownloadOnly/v1.20.0/json-events (39.71s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-996000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-996000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (39.710065208s)

-- stdout --
	{"specversion":"1.0","id":"d8222a5e-49ff-4d09-8263-ab1cae34e94a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-996000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ec5e3b64-54a5-4e93-a25b-9db6c7681470","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18241"}}
	{"specversion":"1.0","id":"cb2e6ac9-5211-40b8-9567-d0ba5240373e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig"}}
	{"specversion":"1.0","id":"5831e6ac-5658-4497-ae94-467bd9f1a247","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"961083b0-a959-44e5-aa96-c40c8aabfeec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b029b1cd-12d8-4bcc-b15c-ea56a9709869","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube"}}
	{"specversion":"1.0","id":"5c977088-528a-4edd-b661-61fb6aeb0072","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"d3537230-852f-4870-84ab-823212b64dbe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ccd5d058-d3ab-45ab-b212-9f61eabc8f7e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"9430de1b-3dfd-43fd-bca4-b9b4cddc9083","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4a472867-e643-48a2-90ed-6f1f4154fd3f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-996000\" primary control-plane node in \"download-only-996000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7c49a47c-48dc-4da5-935f-d8b67d988e56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"34f15305-19c2-43ae-ac44-6f90a6341422","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18241-1349/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10686f0a0 0x10686f0a0 0x10686f0a0 0x10686f0a0 0x10686f0a0 0x10686f0a0 0x10686f0a0] Decompressors:map[bz2:0x140004a93b0 gz:0x140004a93b8 tar:0x140004a9360 tar.bz2:0x140004a9370 tar.gz:0x140004a9380 tar.xz:0x140004a9390 tar.zst:0x140004a93a0 tbz2:0x140004a9370 tgz:0x14
0004a9380 txz:0x140004a9390 tzst:0x140004a93a0 xz:0x140004a93c0 zip:0x140004a93d0 zst:0x140004a93c8] Getters:map[file:0x14002602620 http:0x14000178960 https:0x140001789b0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"12b2fe99-a7c9-43e8-a7d8-936ce0c2c032","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0307 09:28:56.448663    1783 out.go:291] Setting OutFile to fd 1 ...
	I0307 09:28:56.448798    1783 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 09:28:56.448801    1783 out.go:304] Setting ErrFile to fd 2...
	I0307 09:28:56.448803    1783 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 09:28:56.448934    1783 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	W0307 09:28:56.449023    1783 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18241-1349/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18241-1349/.minikube/config/config.json: no such file or directory
	I0307 09:28:56.450235    1783 out.go:298] Setting JSON to true
	I0307 09:28:56.467946    1783 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1708,"bootTime":1709830828,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 09:28:56.468004    1783 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 09:28:56.473248    1783 out.go:97] [download-only-996000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 09:28:56.477122    1783 out.go:169] MINIKUBE_LOCATION=18241
	I0307 09:28:56.473391    1783 notify.go:220] Checking for updates...
	W0307 09:28:56.473421    1783 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball: no such file or directory
	I0307 09:28:56.484992    1783 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 09:28:56.489264    1783 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 09:28:56.496021    1783 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 09:28:56.499170    1783 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	W0307 09:28:56.505134    1783 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0307 09:28:56.505331    1783 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 09:28:56.509122    1783 out.go:97] Using the qemu2 driver based on user configuration
	I0307 09:28:56.509144    1783 start.go:297] selected driver: qemu2
	I0307 09:28:56.509161    1783 start.go:901] validating driver "qemu2" against <nil>
	I0307 09:28:56.509236    1783 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 09:28:56.512188    1783 out.go:169] Automatically selected the socket_vmnet network
	I0307 09:28:56.518111    1783 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0307 09:28:56.518211    1783 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 09:28:56.518312    1783 cni.go:84] Creating CNI manager for ""
	I0307 09:28:56.518330    1783 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0307 09:28:56.518382    1783 start.go:340] cluster config:
	{Name:download-only-996000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-996000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 09:28:56.522875    1783 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 09:28:56.527151    1783 out.go:97] Downloading VM boot image ...
	I0307 09:28:56.527172    1783 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso
	I0307 09:29:14.599321    1783 out.go:97] Starting "download-only-996000" primary control-plane node in "download-only-996000" cluster
	I0307 09:29:14.599367    1783 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0307 09:29:14.871643    1783 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0307 09:29:14.871762    1783 cache.go:56] Caching tarball of preloaded images
	I0307 09:29:14.872537    1783 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0307 09:29:14.878059    1783 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0307 09:29:14.878118    1783 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0307 09:29:15.466596    1783 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0307 09:29:34.639957    1783 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0307 09:29:34.640139    1783 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0307 09:29:35.366876    1783 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0307 09:29:35.367062    1783 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/download-only-996000/config.json ...
	I0307 09:29:35.367081    1783 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/download-only-996000/config.json: {Name:mk96b11f02051e864ff39bad632d46a942eba181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 09:29:35.367324    1783 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0307 09:29:35.367501    1783 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0307 09:29:36.073727    1783 out.go:169] 
	W0307 09:29:36.077780    1783 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18241-1349/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10686f0a0 0x10686f0a0 0x10686f0a0 0x10686f0a0 0x10686f0a0 0x10686f0a0 0x10686f0a0] Decompressors:map[bz2:0x140004a93b0 gz:0x140004a93b8 tar:0x140004a9360 tar.bz2:0x140004a9370 tar.gz:0x140004a9380 tar.xz:0x140004a9390 tar.zst:0x140004a93a0 tbz2:0x140004a9370 tgz:0x140004a9380 txz:0x140004a9390 tzst:0x140004a93a0 xz:0x140004a93c0 zip:0x140004a93d0 zst:0x140004a93c8] Getters:map[file:0x14002602620 http:0x14000178960 https:0x140001789b0] Dir:false ProgressList
ener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0307 09:29:36.077813    1783 out_reason.go:110] 
	W0307 09:29:36.084749    1783 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 09:29:36.088747    1783 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-996000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (39.71s)
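
The failure is a 404 on the kubectl checksum URL, which is what you would expect if no darwin/arm64 kubectl binary was ever published for v1.20.0. A minimal standalone Go sketch (not part of the test suite; it only assumes network access to dl.k8s.io) that probes both URLs from the log:

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		base := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl"
		for _, u := range []string{base, base + ".sha256"} {
			resp, err := http.Head(u)
			if err != nil {
				fmt.Println(u, "error:", err)
				continue
			}
			resp.Body.Close()
			// The log's "bad response code: 404" corresponds to this status.
			fmt.Println(u, "->", resp.Status)
		}
	}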

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/18241-1349/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
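
This subtest fails purely as a knock-on effect of the download failure above: it only stats the binary that the previous step should have cached. An illustrative stand-in for the check at aaa_download_only_test.go:175 (path taken from the message above; the package and test name here are made up):

	package download_test

	import (
		"os"
		"testing"
	)

	// Mirrors the log's existence check: the subtest has nothing to verify
	// beyond the presence of the cached kubectl binary on disk.
	func TestKubectlCached(t *testing.T) {
		binary := "/Users/jenkins/minikube-integration/18241-1349/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
		if _, err := os.Stat(binary); err != nil {
			t.Errorf("expected the file for binary exist at %q but got error %v", binary, err)
		}
	}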

TestOffline (9.97s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-557000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-557000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.792563667s)

-- stdout --
	* [offline-docker-557000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-557000" primary control-plane node in "offline-docker-557000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-557000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 10:09:45.485141    3918 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:09:45.485305    3918 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:09:45.485311    3918 out.go:304] Setting ErrFile to fd 2...
	I0307 10:09:45.485314    3918 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:09:45.485440    3918 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:09:45.486621    3918 out.go:298] Setting JSON to false
	I0307 10:09:45.504367    3918 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4157,"bootTime":1709830828,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 10:09:45.504497    3918 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:09:45.509319    3918 out.go:177] * [offline-docker-557000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 10:09:45.517296    3918 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 10:09:45.520411    3918 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:09:45.517323    3918 notify.go:220] Checking for updates...
	I0307 10:09:45.523388    3918 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 10:09:45.524594    3918 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:09:45.527356    3918 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 10:09:45.530382    3918 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:09:45.533793    3918 config.go:182] Loaded profile config "multinode-606000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:09:45.533847    3918 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:09:45.538333    3918 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 10:09:45.545377    3918 start.go:297] selected driver: qemu2
	I0307 10:09:45.545395    3918 start.go:901] validating driver "qemu2" against <nil>
	I0307 10:09:45.545402    3918 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:09:45.547395    3918 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 10:09:45.550357    3918 out.go:177] * Automatically selected the socket_vmnet network
	I0307 10:09:45.553436    3918 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 10:09:45.553480    3918 cni.go:84] Creating CNI manager for ""
	I0307 10:09:45.553489    3918 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 10:09:45.553493    3918 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 10:09:45.553527    3918 start.go:340] cluster config:
	{Name:offline-docker-557000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-557000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bi
n/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:09:45.557900    3918 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:09:45.565386    3918 out.go:177] * Starting "offline-docker-557000" primary control-plane node in "offline-docker-557000" cluster
	I0307 10:09:45.569426    3918 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 10:09:45.569453    3918 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 10:09:45.569462    3918 cache.go:56] Caching tarball of preloaded images
	I0307 10:09:45.569534    3918 preload.go:173] Found /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 10:09:45.569540    3918 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 10:09:45.569602    3918 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/offline-docker-557000/config.json ...
	I0307 10:09:45.569612    3918 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/offline-docker-557000/config.json: {Name:mkfb4a282ef67d15132c910724c4c9cbbcba52b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:09:45.569900    3918 start.go:360] acquireMachinesLock for offline-docker-557000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:09:45.569931    3918 start.go:364] duration metric: took 23.417µs to acquireMachinesLock for "offline-docker-557000"
	I0307 10:09:45.569944    3918 start.go:93] Provisioning new machine with config: &{Name:offline-docker-557000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-557000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:09:45.569975    3918 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:09:45.577387    3918 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0307 10:09:45.592932    3918 start.go:159] libmachine.API.Create for "offline-docker-557000" (driver="qemu2")
	I0307 10:09:45.592967    3918 client.go:168] LocalClient.Create starting
	I0307 10:09:45.593042    3918 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:09:45.593074    3918 main.go:141] libmachine: Decoding PEM data...
	I0307 10:09:45.593085    3918 main.go:141] libmachine: Parsing certificate...
	I0307 10:09:45.593134    3918 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:09:45.593155    3918 main.go:141] libmachine: Decoding PEM data...
	I0307 10:09:45.593161    3918 main.go:141] libmachine: Parsing certificate...
	I0307 10:09:45.593495    3918 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:09:45.733545    3918 main.go:141] libmachine: Creating SSH key...
	I0307 10:09:45.848891    3918 main.go:141] libmachine: Creating Disk image...
	I0307 10:09:45.848903    3918 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:09:45.849079    3918 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/offline-docker-557000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/offline-docker-557000/disk.qcow2
	I0307 10:09:45.862819    3918 main.go:141] libmachine: STDOUT: 
	I0307 10:09:45.862845    3918 main.go:141] libmachine: STDERR: 
	I0307 10:09:45.862944    3918 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/offline-docker-557000/disk.qcow2 +20000M
	I0307 10:09:45.877523    3918 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:09:45.877542    3918 main.go:141] libmachine: STDERR: 
	I0307 10:09:45.877559    3918 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/offline-docker-557000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/offline-docker-557000/disk.qcow2
	I0307 10:09:45.877564    3918 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:09:45.877602    3918 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/offline-docker-557000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/offline-docker-557000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/offline-docker-557000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:ad:00:66:ca:c2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/offline-docker-557000/disk.qcow2
	I0307 10:09:45.879391    3918 main.go:141] libmachine: STDOUT: 
	I0307 10:09:45.879411    3918 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:09:45.879431    3918 client.go:171] duration metric: took 286.467542ms to LocalClient.Create
	I0307 10:09:47.880443    3918 start.go:128] duration metric: took 2.310532083s to createHost
	I0307 10:09:47.880466    3918 start.go:83] releasing machines lock for "offline-docker-557000", held for 2.310607333s
	W0307 10:09:47.880476    3918 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:09:47.888922    3918 out.go:177] * Deleting "offline-docker-557000" in qemu2 ...
	W0307 10:09:47.897801    3918 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:09:47.897812    3918 start.go:728] Will try again in 5 seconds ...
	I0307 10:09:52.899740    3918 start.go:360] acquireMachinesLock for offline-docker-557000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:09:52.899841    3918 start.go:364] duration metric: took 78.667µs to acquireMachinesLock for "offline-docker-557000"
	I0307 10:09:52.899872    3918 start.go:93] Provisioning new machine with config: &{Name:offline-docker-557000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-557000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:09:52.899946    3918 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:09:52.908614    3918 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0307 10:09:52.929696    3918 start.go:159] libmachine.API.Create for "offline-docker-557000" (driver="qemu2")
	I0307 10:09:52.929721    3918 client.go:168] LocalClient.Create starting
	I0307 10:09:52.929795    3918 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:09:52.929839    3918 main.go:141] libmachine: Decoding PEM data...
	I0307 10:09:52.929849    3918 main.go:141] libmachine: Parsing certificate...
	I0307 10:09:52.929888    3918 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:09:52.929913    3918 main.go:141] libmachine: Decoding PEM data...
	I0307 10:09:52.929924    3918 main.go:141] libmachine: Parsing certificate...
	I0307 10:09:52.930274    3918 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:09:53.070874    3918 main.go:141] libmachine: Creating SSH key...
	I0307 10:09:53.168568    3918 main.go:141] libmachine: Creating Disk image...
	I0307 10:09:53.168574    3918 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:09:53.168737    3918 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/offline-docker-557000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/offline-docker-557000/disk.qcow2
	I0307 10:09:53.181136    3918 main.go:141] libmachine: STDOUT: 
	I0307 10:09:53.181158    3918 main.go:141] libmachine: STDERR: 
	I0307 10:09:53.181212    3918 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/offline-docker-557000/disk.qcow2 +20000M
	I0307 10:09:53.191855    3918 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:09:53.191872    3918 main.go:141] libmachine: STDERR: 
	I0307 10:09:53.191889    3918 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/offline-docker-557000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/offline-docker-557000/disk.qcow2
	I0307 10:09:53.191894    3918 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:09:53.191924    3918 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/offline-docker-557000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/offline-docker-557000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/offline-docker-557000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:d9:e2:ee:dd:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/offline-docker-557000/disk.qcow2
	I0307 10:09:53.193550    3918 main.go:141] libmachine: STDOUT: 
	I0307 10:09:53.193566    3918 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:09:53.193579    3918 client.go:171] duration metric: took 263.863042ms to LocalClient.Create
	I0307 10:09:55.195727    3918 start.go:128] duration metric: took 2.295823458s to createHost
	I0307 10:09:55.195824    3918 start.go:83] releasing machines lock for "offline-docker-557000", held for 2.2960465s
	W0307 10:09:55.196248    3918 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-557000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-557000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:09:55.210821    3918 out.go:177] 
	W0307 10:09:55.215001    3918 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:09:55.215056    3918 out.go:239] * 
	* 
	W0307 10:09:55.217491    3918 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:09:55.231028    3918 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-557000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-03-07 10:09:55.246273 -0800 PST m=+2459.068496501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-557000 -n offline-docker-557000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-557000 -n offline-docker-557000: exit status 7 (69.362125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-557000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-557000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-557000
--- FAIL: TestOffline (9.97s)
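
Both create attempts above fail identically: the unix-socket connect to /var/run/socket_vmnet is refused, which indicates no socket_vmnet daemon was listening on the build agent. Most of the ~10s exit-80 start failures in this report show the same message. A minimal Go sketch (socket path taken from the log) that checks the socket directly:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// Matches the log's `Failed to connect to "/var/run/socket_vmnet": Connection refused`.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}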

TestAddons/parallel/Ingress (35.68s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-040000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-040000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-040000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [86f0c9c5-f243-4279-b6b1-340566b4ce31] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [86f0c9c5-f243-4279-b6b1-340566b4ce31] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004042708s
addons_test.go:262: (dbg) Run:  out/minikube-darwin-arm64 -p addons-040000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-040000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-darwin-arm64 -p addons-040000 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.2: exit status 1 (15.023821792s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
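
The nslookup step only asserts that hello-john.test resolves via the ingress-dns addon at the minikube IP; the timeout means nothing answered DNS on 192.168.105.2. An equivalent standalone Go sketch (IP and hostname taken from the log; the resolver setup is illustrative, not minikube's code):

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 5 * time.Second}
				// Bypass the system resolver and ask the minikube VM directly.
				return d.DialContext(ctx, network, "192.168.105.2:53")
			},
		}
		addrs, err := r.LookupHost(context.Background(), "hello-john.test")
		fmt.Println(addrs, err)
	}
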
addons_test.go:306: (dbg) Run:  out/minikube-darwin-arm64 -p addons-040000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-darwin-arm64 -p addons-040000 addons disable ingress-dns --alsologtostderr -v=1: (1.041688083s)
addons_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 -p addons-040000 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-darwin-arm64 -p addons-040000 addons disable ingress --alsologtostderr -v=1: (7.222824042s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-040000 -n addons-040000
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-040000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 07 Mar 24 09:30 PST | 07 Mar 24 09:30 PST |
	| delete  | -p download-only-307000                                                                     | download-only-307000 | jenkins | v1.32.0 | 07 Mar 24 09:30 PST | 07 Mar 24 09:30 PST |
	| start   | -o=json --download-only                                                                     | download-only-861000 | jenkins | v1.32.0 | 07 Mar 24 09:30 PST |                     |
	|         | -p download-only-861000                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 07 Mar 24 09:30 PST | 07 Mar 24 09:30 PST |
	| delete  | -p download-only-861000                                                                     | download-only-861000 | jenkins | v1.32.0 | 07 Mar 24 09:30 PST | 07 Mar 24 09:30 PST |
	| delete  | -p download-only-996000                                                                     | download-only-996000 | jenkins | v1.32.0 | 07 Mar 24 09:30 PST | 07 Mar 24 09:30 PST |
	| delete  | -p download-only-307000                                                                     | download-only-307000 | jenkins | v1.32.0 | 07 Mar 24 09:30 PST | 07 Mar 24 09:30 PST |
	| delete  | -p download-only-861000                                                                     | download-only-861000 | jenkins | v1.32.0 | 07 Mar 24 09:30 PST | 07 Mar 24 09:30 PST |
	| start   | --download-only -p                                                                          | binary-mirror-371000 | jenkins | v1.32.0 | 07 Mar 24 09:30 PST |                     |
	|         | binary-mirror-371000                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49328                                                                      |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-371000                                                                     | binary-mirror-371000 | jenkins | v1.32.0 | 07 Mar 24 09:30 PST | 07 Mar 24 09:30 PST |
	| addons  | enable dashboard -p                                                                         | addons-040000        | jenkins | v1.32.0 | 07 Mar 24 09:30 PST |                     |
	|         | addons-040000                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-040000        | jenkins | v1.32.0 | 07 Mar 24 09:30 PST |                     |
	|         | addons-040000                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-040000 --wait=true                                                                | addons-040000        | jenkins | v1.32.0 | 07 Mar 24 09:30 PST | 07 Mar 24 09:34 PST |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=qemu2                                                                |                      |         |         |                     |                     |
	|         |  --addons=ingress                                                                           |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| ip      | addons-040000 ip                                                                            | addons-040000        | jenkins | v1.32.0 | 07 Mar 24 09:34 PST | 07 Mar 24 09:34 PST |
	| addons  | addons-040000 addons disable                                                                | addons-040000        | jenkins | v1.32.0 | 07 Mar 24 09:34 PST | 07 Mar 24 09:34 PST |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-040000 addons                                                                        | addons-040000        | jenkins | v1.32.0 | 07 Mar 24 09:34 PST | 07 Mar 24 09:34 PST |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-040000        | jenkins | v1.32.0 | 07 Mar 24 09:34 PST | 07 Mar 24 09:34 PST |
	|         | addons-040000                                                                               |                      |         |         |                     |                     |
	| addons  | addons-040000 addons                                                                        | addons-040000        | jenkins | v1.32.0 | 07 Mar 24 09:34 PST | 07 Mar 24 09:34 PST |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-040000 ssh curl -s                                                                   | addons-040000        | jenkins | v1.32.0 | 07 Mar 24 09:34 PST | 07 Mar 24 09:34 PST |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-040000 ip                                                                            | addons-040000        | jenkins | v1.32.0 | 07 Mar 24 09:34 PST | 07 Mar 24 09:34 PST |
	| addons  | addons-040000 addons                                                                        | addons-040000        | jenkins | v1.32.0 | 07 Mar 24 09:34 PST | 07 Mar 24 09:34 PST |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-040000 addons disable                                                                | addons-040000        | jenkins | v1.32.0 | 07 Mar 24 09:35 PST | 07 Mar 24 09:35 PST |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-040000 addons disable                                                                | addons-040000        | jenkins | v1.32.0 | 07 Mar 24 09:35 PST | 07 Mar 24 09:35 PST |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| ssh     | addons-040000 ssh cat                                                                       | addons-040000        | jenkins | v1.32.0 | 07 Mar 24 09:35 PST | 07 Mar 24 09:35 PST |
	|         | /opt/local-path-provisioner/pvc-c4f2b98e-ba12-40bb-bdca-a7b767b70ab6_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-040000 addons disable                                                                | addons-040000        | jenkins | v1.32.0 | 07 Mar 24 09:35 PST |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 09:30:42
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 09:30:42.699754    1980 out.go:291] Setting OutFile to fd 1 ...
	I0307 09:30:42.699868    1980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 09:30:42.699871    1980 out.go:304] Setting ErrFile to fd 2...
	I0307 09:30:42.699880    1980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 09:30:42.699997    1980 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 09:30:42.701086    1980 out.go:298] Setting JSON to false
	I0307 09:30:42.717299    1980 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1814,"bootTime":1709830828,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 09:30:42.717363    1980 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 09:30:42.721561    1980 out.go:177] * [addons-040000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 09:30:42.728453    1980 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 09:30:42.728519    1980 notify.go:220] Checking for updates...
	I0307 09:30:42.735442    1980 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 09:30:42.738378    1980 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 09:30:42.741454    1980 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 09:30:42.744441    1980 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 09:30:42.747471    1980 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 09:30:42.750548    1980 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 09:30:42.754452    1980 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 09:30:42.761435    1980 start.go:297] selected driver: qemu2
	I0307 09:30:42.761440    1980 start.go:901] validating driver "qemu2" against <nil>
	I0307 09:30:42.761445    1980 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 09:30:42.763552    1980 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 09:30:42.766424    1980 out.go:177] * Automatically selected the socket_vmnet network
	I0307 09:30:42.769463    1980 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 09:30:42.769515    1980 cni.go:84] Creating CNI manager for ""
	I0307 09:30:42.769525    1980 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 09:30:42.769534    1980 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 09:30:42.769574    1980 start.go:340] cluster config:
	{Name:addons-040000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-040000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 09:30:42.773991    1980 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 09:30:42.782444    1980 out.go:177] * Starting "addons-040000" primary control-plane node in "addons-040000" cluster
	I0307 09:30:42.786399    1980 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 09:30:42.786415    1980 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 09:30:42.786429    1980 cache.go:56] Caching tarball of preloaded images
	I0307 09:30:42.786506    1980 preload.go:173] Found /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 09:30:42.786518    1980 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 09:30:42.786745    1980 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/config.json ...
	I0307 09:30:42.786757    1980 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/config.json: {Name:mkb5fd3f64f4e6a1895730a522447b41ecc9ee3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 09:30:42.786976    1980 start.go:360] acquireMachinesLock for addons-040000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 09:30:42.787120    1980 start.go:364] duration metric: took 138.083µs to acquireMachinesLock for "addons-040000"
	I0307 09:30:42.787131    1980 start.go:93] Provisioning new machine with config: &{Name:addons-040000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-040000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 09:30:42.787171    1980 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 09:30:42.791259    1980 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0307 09:30:43.724661    1980 start.go:159] libmachine.API.Create for "addons-040000" (driver="qemu2")
	I0307 09:30:43.724730    1980 client.go:168] LocalClient.Create starting
	I0307 09:30:43.724959    1980 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 09:30:43.879773    1980 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 09:30:43.961430    1980 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 09:30:44.628200    1980 main.go:141] libmachine: Creating SSH key...
	I0307 09:30:44.743720    1980 main.go:141] libmachine: Creating Disk image...
	I0307 09:30:44.743729    1980 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 09:30:44.743944    1980 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/disk.qcow2
	I0307 09:30:44.831764    1980 main.go:141] libmachine: STDOUT: 
	I0307 09:30:44.831797    1980 main.go:141] libmachine: STDERR: 
	I0307 09:30:44.831876    1980 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/disk.qcow2 +20000M
	I0307 09:30:44.844583    1980 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 09:30:44.844601    1980 main.go:141] libmachine: STDERR: 
	I0307 09:30:44.844620    1980 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/disk.qcow2
	I0307 09:30:44.844627    1980 main.go:141] libmachine: Starting QEMU VM...
	I0307 09:30:44.844660    1980 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:67:5f:49:3a:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/disk.qcow2
	I0307 09:30:44.897307    1980 main.go:141] libmachine: STDOUT: 
	I0307 09:30:44.897349    1980 main.go:141] libmachine: STDERR: 
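
Note: the qemu-system-aarch64 invocation above is wrapped in socket_vmnet_client so guest networking rides on the socket_vmnet file descriptor handed to QEMU as -netdev socket,id=net0,fd=3. The remaining flags are standard QEMU usage: -M virt selects the generic ARM virtual machine, -accel hvf uses Apple's Hypervisor.framework, -m 4000 and -smp 2 match the Memory=4000/CPUs=2 cluster config, -boot d boots the boot2docker ISO attached with -cdrom, -qmp exposes a management socket next to the pidfile, and -daemonize backgrounds the VM; the empty STDOUT/STDERR lines indicate the daemonized launch reported no errors.
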
	I0307 09:30:44.897354    1980 main.go:141] libmachine: Attempt 0
	I0307 09:30:44.897363    1980 main.go:141] libmachine: Searching for 46:67:5f:49:3a:bd in /var/db/dhcpd_leases ...
	I0307 09:30:44.897567    1980 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0307 09:30:44.897588    1980 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65eb4abe}
	I0307 09:30:46.899725    1980 main.go:141] libmachine: Attempt 1
	I0307 09:30:46.899810    1980 main.go:141] libmachine: Searching for 46:67:5f:49:3a:bd in /var/db/dhcpd_leases ...
	I0307 09:30:46.900128    1980 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0307 09:30:46.900178    1980 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65eb4abe}
	I0307 09:30:48.900485    1980 main.go:141] libmachine: Attempt 2
	I0307 09:30:48.900615    1980 main.go:141] libmachine: Searching for 46:67:5f:49:3a:bd in /var/db/dhcpd_leases ...
	I0307 09:30:48.900993    1980 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0307 09:30:48.901055    1980 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65eb4abe}
	I0307 09:30:50.903220    1980 main.go:141] libmachine: Attempt 3
	I0307 09:30:50.903318    1980 main.go:141] libmachine: Searching for 46:67:5f:49:3a:bd in /var/db/dhcpd_leases ...
	I0307 09:30:50.903377    1980 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0307 09:30:50.903397    1980 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65eb4abe}
	I0307 09:30:52.905420    1980 main.go:141] libmachine: Attempt 4
	I0307 09:30:52.905427    1980 main.go:141] libmachine: Searching for 46:67:5f:49:3a:bd in /var/db/dhcpd_leases ...
	I0307 09:30:52.905451    1980 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0307 09:30:52.905456    1980 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65eb4abe}
	I0307 09:30:54.907464    1980 main.go:141] libmachine: Attempt 5
	I0307 09:30:54.907472    1980 main.go:141] libmachine: Searching for 46:67:5f:49:3a:bd in /var/db/dhcpd_leases ...
	I0307 09:30:54.907499    1980 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0307 09:30:54.907505    1980 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65eb4abe}
	I0307 09:30:56.909549    1980 main.go:141] libmachine: Attempt 6
	I0307 09:30:56.909577    1980 main.go:141] libmachine: Searching for 46:67:5f:49:3a:bd in /var/db/dhcpd_leases ...
	I0307 09:30:56.909638    1980 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0307 09:30:56.909648    1980 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65eb4abe}
	I0307 09:30:58.911816    1980 main.go:141] libmachine: Attempt 7
	I0307 09:30:58.911924    1980 main.go:141] libmachine: Searching for 46:67:5f:49:3a:bd in /var/db/dhcpd_leases ...
	I0307 09:30:58.912258    1980 main.go:141] libmachine: Found 2 entries in /var/db/dhcpd_leases!
	I0307 09:30:58.912453    1980 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:46:67:5f:49:3a:bd ID:1,46:67:5f:49:3a:bd Lease:0x65eb4b51}
	I0307 09:30:58.912475    1980 main.go:141] libmachine: Found match: 46:67:5f:49:3a:bd
	I0307 09:30:58.912513    1980 main.go:141] libmachine: IP: 192.168.105.2
	I0307 09:30:58.912536    1980 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
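
Note: the numbered "Attempt" lines above are the qemu2 driver's IP discovery: it re-reads the host's /var/db/dhcpd_leases roughly every two seconds until an entry carrying the VM's generated MAC address (46:67:5f:49:3a:bd) appears, then uses that lease's IP (192.168.105.2). A minimal Go sketch of that polling loop, assuming a hypothetical parseLeases helper (minikube's actual lease-file parsing is more involved):

package discover

import (
	"fmt"
	"os"
	"time"
)

// lease mirrors the fields printed in the log's "dhcp entry" lines.
type lease struct{ Name, IPAddress, HWAddress string }

// parseLeases is a hypothetical stand-in for parsing /var/db/dhcpd_leases.
func parseLeases(data []byte) []lease { return nil /* parsing elided */ }

// waitForIP polls the lease file until the given MAC appears or attempts run out.
func waitForIP(mac string, maxAttempts int) (string, error) {
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if data, err := os.ReadFile("/var/db/dhcpd_leases"); err == nil {
			for _, l := range parseLeases(data) {
				if l.HWAddress == mac {
					return l.IPAddress, nil
				}
			}
		}
		time.Sleep(2 * time.Second) // matches the ~2 s spacing of the attempts above
	}
	return "", fmt.Errorf("no DHCP lease for %s after %d attempts", mac, maxAttempts)
}
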
	I0307 09:31:01.933634    1980 machine.go:94] provisionDockerMachine start ...
	I0307 09:31:01.934382    1980 main.go:141] libmachine: Using SSH client type: native
	I0307 09:31:01.935026    1980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ba9a30] 0x100bac290 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0307 09:31:01.935043    1980 main.go:141] libmachine: About to run SSH command:
	hostname
	I0307 09:31:01.998219    1980 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0307 09:31:01.998278    1980 buildroot.go:166] provisioning hostname "addons-040000"
	I0307 09:31:01.998392    1980 main.go:141] libmachine: Using SSH client type: native
	I0307 09:31:01.998645    1980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ba9a30] 0x100bac290 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0307 09:31:01.998655    1980 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-040000 && echo "addons-040000" | sudo tee /etc/hostname
	I0307 09:31:02.056246    1980 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-040000
	
	I0307 09:31:02.056335    1980 main.go:141] libmachine: Using SSH client type: native
	I0307 09:31:02.056524    1980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ba9a30] 0x100bac290 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0307 09:31:02.056534    1980 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-040000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-040000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-040000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 09:31:02.103342    1980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
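
Note: the /etc/hosts fragment above is written to be idempotent: grep -xq first checks whether a line already maps the new hostname; if not, an existing 127.0.1.1 entry is rewritten in place with sed, and only when neither exists is a fresh "127.0.1.1 addons-040000" line appended via tee. The empty output here shows the append branch (whose tee would echo the new line) was not taken.
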
	I0307 09:31:02.103353    1980 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18241-1349/.minikube CaCertPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18241-1349/.minikube}
	I0307 09:31:02.103366    1980 buildroot.go:174] setting up certificates
	I0307 09:31:02.103371    1980 provision.go:84] configureAuth start
	I0307 09:31:02.103377    1980 provision.go:143] copyHostCerts
	I0307 09:31:02.103487    1980 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18241-1349/.minikube/cert.pem (1123 bytes)
	I0307 09:31:02.104486    1980 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18241-1349/.minikube/key.pem (1679 bytes)
	I0307 09:31:02.104935    1980 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18241-1349/.minikube/ca.pem (1078 bytes)
	I0307 09:31:02.105230    1980 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca-key.pem org=jenkins.addons-040000 san=[127.0.0.1 192.168.105.2 addons-040000 localhost minikube]
	I0307 09:31:02.183418    1980 provision.go:177] copyRemoteCerts
	I0307 09:31:02.183468    1980 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 09:31:02.183485    1980 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/id_rsa Username:docker}
	I0307 09:31:02.208555    1980 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0307 09:31:02.216723    1980 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0307 09:31:02.224889    1980 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0307 09:31:02.233043    1980 provision.go:87] duration metric: took 129.66675ms to configureAuth
	I0307 09:31:02.233052    1980 buildroot.go:189] setting minikube options for container-runtime
	I0307 09:31:02.233351    1980 config.go:182] Loaded profile config "addons-040000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 09:31:02.233387    1980 main.go:141] libmachine: Using SSH client type: native
	I0307 09:31:02.233478    1980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ba9a30] 0x100bac290 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0307 09:31:02.233483    1980 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0307 09:31:02.275574    1980 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0307 09:31:02.275584    1980 buildroot.go:70] root file system type: tmpfs
	I0307 09:31:02.275634    1980 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0307 09:31:02.275675    1980 main.go:141] libmachine: Using SSH client type: native
	I0307 09:31:02.275771    1980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ba9a30] 0x100bac290 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0307 09:31:02.275807    1980 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0307 09:31:02.321335    1980 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0307 09:31:02.321384    1980 main.go:141] libmachine: Using SSH client type: native
	I0307 09:31:02.321484    1980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ba9a30] 0x100bac290 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0307 09:31:02.321492    1980 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0307 09:31:02.637853    1980 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
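
Note: the "diff ... || { ... }" one-liner above only swaps in docker.service.new and bounces the daemon when the rendered unit differs from what is installed. On this fresh VM the diff fails outright because /lib/systemd/system/docker.service does not exist yet, so the fallback branch installs the unit, reloads systemd, and enables docker, which is what produces the "Created symlink" line.
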
	
	I0307 09:31:02.637867    1980 machine.go:97] duration metric: took 704.210542ms to provisionDockerMachine
	I0307 09:31:02.637872    1980 client.go:171] duration metric: took 18.913356458s to LocalClient.Create
	I0307 09:31:02.637892    1980 start.go:167] duration metric: took 18.913458833s to libmachine.API.Create "addons-040000"
	I0307 09:31:02.637896    1980 start.go:293] postStartSetup for "addons-040000" (driver="qemu2")
	I0307 09:31:02.637901    1980 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 09:31:02.637973    1980 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 09:31:02.637983    1980 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/id_rsa Username:docker}
	I0307 09:31:02.662108    1980 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 09:31:02.663461    1980 info.go:137] Remote host: Buildroot 2023.02.9
	I0307 09:31:02.663470    1980 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18241-1349/.minikube/addons for local assets ...
	I0307 09:31:02.663543    1980 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18241-1349/.minikube/files for local assets ...
	I0307 09:31:02.663572    1980 start.go:296] duration metric: took 25.674333ms for postStartSetup
	I0307 09:31:02.663926    1980 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/config.json ...
	I0307 09:31:02.664094    1980 start.go:128] duration metric: took 19.877149042s to createHost
	I0307 09:31:02.664116    1980 main.go:141] libmachine: Using SSH client type: native
	I0307 09:31:02.664196    1980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ba9a30] 0x100bac290 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0307 09:31:02.664200    1980 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0307 09:31:02.706675    1980 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709832662.590848502
	
	I0307 09:31:02.707296    1980 fix.go:216] guest clock: 1709832662.590848502
	I0307 09:31:02.707300    1980 fix.go:229] Guest: 2024-03-07 09:31:02.590848502 -0800 PST Remote: 2024-03-07 09:31:02.664097 -0800 PST m=+19.985634792 (delta=-73.248498ms)
	I0307 09:31:02.707314    1980 fix.go:200] guest clock delta is within tolerance: -73.248498ms
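
Note: the reported delta is simply guest clock minus host clock: 1709832662.590848502 s - 1709832662.664097 s = -0.073248498 s, i.e. the -73.248498 ms in the log, small enough that no guest clock correction is needed.
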
	I0307 09:31:02.707317    1980 start.go:83] releasing machines lock for "addons-040000", held for 19.920423208s
	I0307 09:31:02.707579    1980 ssh_runner.go:195] Run: cat /version.json
	I0307 09:31:02.707589    1980 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/id_rsa Username:docker}
	I0307 09:31:02.707596    1980 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 09:31:02.707612    1980 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/id_rsa Username:docker}
	I0307 09:31:02.782708    1980 ssh_runner.go:195] Run: systemctl --version
	I0307 09:31:02.785162    1980 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0307 09:31:02.787216    1980 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0307 09:31:02.787247    1980 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0307 09:31:02.793483    1980 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0307 09:31:02.793493    1980 start.go:494] detecting cgroup driver to use...
	I0307 09:31:02.793617    1980 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 09:31:02.800256    1980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0307 09:31:02.803797    1980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 09:31:02.807233    1980 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 09:31:02.807258    1980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 09:31:02.810913    1980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 09:31:02.814818    1980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 09:31:02.818560    1980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 09:31:02.822559    1980 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 09:31:02.826476    1980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 09:31:02.830359    1980 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 09:31:02.834361    1980 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 09:31:02.838300    1980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 09:31:02.909283    1980 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0307 09:31:02.920178    1980 start.go:494] detecting cgroup driver to use...
	I0307 09:31:02.920257    1980 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0307 09:31:02.926122    1980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 09:31:02.931831    1980 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0307 09:31:02.939027    1980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 09:31:02.944506    1980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 09:31:02.949523    1980 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0307 09:31:02.998109    1980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 09:31:03.004646    1980 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 09:31:03.011360    1980 ssh_runner.go:195] Run: which cri-dockerd
	I0307 09:31:03.012659    1980 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0307 09:31:03.015746    1980 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0307 09:31:03.021628    1980 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0307 09:31:03.092697    1980 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0307 09:31:03.168536    1980 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0307 09:31:03.168599    1980 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0307 09:31:03.174616    1980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 09:31:03.237051    1980 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 09:31:04.395339    1980 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.158287208s)
	I0307 09:31:04.395410    1980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0307 09:31:04.401240    1980 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0307 09:31:04.412760    1980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 09:31:04.418303    1980 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0307 09:31:04.508674    1980 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 09:31:04.580422    1980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 09:31:04.640935    1980 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0307 09:31:04.647993    1980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 09:31:04.652908    1980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 09:31:04.743033    1980 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0307 09:31:04.765919    1980 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0307 09:31:04.765987    1980 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0307 09:31:04.768757    1980 start.go:562] Will wait 60s for crictl version
	I0307 09:31:04.768801    1980 ssh_runner.go:195] Run: which crictl
	I0307 09:31:04.770269    1980 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 09:31:04.791032    1980 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0307 09:31:04.791106    1980 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 09:31:04.802379    1980 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 09:31:04.812911    1980 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0307 09:31:04.813058    1980 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0307 09:31:04.814638    1980 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 09:31:04.819099    1980 kubeadm.go:877] updating cluster {Name:addons-040000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-040000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0307 09:31:04.819154    1980 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 09:31:04.819198    1980 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 09:31:04.824883    1980 docker.go:685] Got preloaded images: 
	I0307 09:31:04.824890    1980 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0307 09:31:04.824924    1980 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0307 09:31:04.828324    1980 ssh_runner.go:195] Run: which lz4
	I0307 09:31:04.829621    1980 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0307 09:31:04.830914    1980 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0307 09:31:04.830923    1980 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (357941720 bytes)
	I0307 09:31:06.124960    1980 docker.go:649] duration metric: took 1.295382584s to copy over tarball
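
Note: 357941720 bytes in 1.295382584 s works out to roughly 276 MB/s (about 263 MiB/s) for pushing the preload tarball into the VM over SSH.
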
	I0307 09:31:06.125017    1980 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0307 09:31:07.221318    1980 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.096279541s)
	I0307 09:31:07.221344    1980 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0307 09:31:07.238038    1980 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0307 09:31:07.242247    1980 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0307 09:31:07.248056    1980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 09:31:07.311702    1980 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 09:31:09.386035    1980 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.074341042s)
	I0307 09:31:09.386124    1980 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 09:31:09.392821    1980 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0307 09:31:09.392832    1980 cache_images.go:84] Images are preloaded, skipping loading
	I0307 09:31:09.392844    1980 kubeadm.go:928] updating node { 192.168.105.2 8443 v1.28.4 docker true true} ...
	I0307 09:31:09.392931    1980 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-040000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-040000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0307 09:31:09.392990    1980 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0307 09:31:09.400666    1980 cni.go:84] Creating CNI manager for ""
	I0307 09:31:09.400679    1980 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 09:31:09.400690    1980 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0307 09:31:09.400713    1980 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-040000 NodeName:addons-040000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0307 09:31:09.400794    1980 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-040000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0307 09:31:09.400857    1980 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0307 09:31:09.404718    1980 binaries.go:44] Found k8s binaries, skipping transfer
	I0307 09:31:09.404756    1980 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0307 09:31:09.408095    1980 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0307 09:31:09.413821    1980 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 09:31:09.419493    1980 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
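With kubeadm.yaml.new now on the node, the generated config can be sanity-checked in place; newer kubeadm releases ship a validate subcommand (a sketch, using the binary and path visible in this log):

	# Validate the generated config against the kubeadm API schema
	sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new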
	I0307 09:31:09.425453    1980 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0307 09:31:09.426816    1980 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 09:31:09.430777    1980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 09:31:09.504998    1980 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 09:31:09.512418    1980 certs.go:68] Setting up /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000 for IP: 192.168.105.2
	I0307 09:31:09.512425    1980 certs.go:194] generating shared ca certs ...
	I0307 09:31:09.512434    1980 certs.go:226] acquiring lock for ca certs: {Name:mkc8d76d77d4efc8795fd6159d984855be90a666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 09:31:09.512617    1980 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18241-1349/.minikube/ca.key
	I0307 09:31:09.562511    1980 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18241-1349/.minikube/ca.crt ...
	I0307 09:31:09.562523    1980 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/ca.crt: {Name:mkd6827c80c99df41ab74892cea15a28663e863b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 09:31:09.562789    1980 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18241-1349/.minikube/ca.key ...
	I0307 09:31:09.562793    1980 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/ca.key: {Name:mk00c5316f706f5bbd9458775f20681bb1bc581d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 09:31:09.562918    1980 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18241-1349/.minikube/proxy-client-ca.key
	I0307 09:31:09.682336    1980 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18241-1349/.minikube/proxy-client-ca.crt ...
	I0307 09:31:09.682341    1980 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/proxy-client-ca.crt: {Name:mkad8a94068f150263dd18389f608679cdc0cd9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 09:31:09.682472    1980 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18241-1349/.minikube/proxy-client-ca.key ...
	I0307 09:31:09.682475    1980 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/proxy-client-ca.key: {Name:mk7e458a67f216d10c5191315bd930e8cd1637fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
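The minikubeCA and proxyClientCA material generated here is a plain self-signed CA pair; an equivalent can be produced manually with openssl (illustrative sketch only — minikube does this in-process in certs.go, and the CN shown is the one from this log):

	# Generate a CA key and a long-lived self-signed CA certificate
	openssl genrsa -out ca.key 2048
	openssl req -x509 -new -nodes -key ca.key -subj "/CN=minikubeCA" -days 3650 -out ca.crt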
	I0307 09:31:09.682579    1980 certs.go:256] generating profile certs ...
	I0307 09:31:09.682615    1980 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/client.key
	I0307 09:31:09.682621    1980 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/client.crt with IP's: []
	I0307 09:31:09.919542    1980 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/client.crt ...
	I0307 09:31:09.919548    1980 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/client.crt: {Name:mkde797960121721a4c1790bf0c03eeac4098c56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 09:31:09.919745    1980 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/client.key ...
	I0307 09:31:09.919749    1980 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/client.key: {Name:mk923255fc924cecce2afc54ece2303f22f4b513 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 09:31:09.919862    1980 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/apiserver.key.aae8b5c2
	I0307 09:31:09.919873    1980 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/apiserver.crt.aae8b5c2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.2]
	I0307 09:31:09.981094    1980 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/apiserver.crt.aae8b5c2 ...
	I0307 09:31:09.981105    1980 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/apiserver.crt.aae8b5c2: {Name:mkaa14fc6d553d8ace97e28435f440b85564ce29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 09:31:09.981336    1980 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/apiserver.key.aae8b5c2 ...
	I0307 09:31:09.981341    1980 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/apiserver.key.aae8b5c2: {Name:mk8a09d9b5ee82f0a26342fc25e8eb2382c89ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 09:31:09.981458    1980 certs.go:381] copying /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/apiserver.crt.aae8b5c2 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/apiserver.crt
	I0307 09:31:09.981599    1980 certs.go:385] copying /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/apiserver.key.aae8b5c2 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/apiserver.key
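The apiserver certificate assembled above is signed for the service VIP, loopback, and node IPs listed at the generation step; the SANs can be confirmed with openssl (a sketch using this run's profile path):

	# Print the Subject Alternative Names baked into the apiserver cert
	openssl x509 -noout -text \
	  -in /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'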
	I0307 09:31:09.981697    1980 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/proxy-client.key
	I0307 09:31:09.981707    1980 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/proxy-client.crt with IP's: []
	I0307 09:31:10.017758    1980 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/proxy-client.crt ...
	I0307 09:31:10.017762    1980 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/proxy-client.crt: {Name:mka4915ed6e20c36c626f813b1b3ec8128e0decc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 09:31:10.017876    1980 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/proxy-client.key ...
	I0307 09:31:10.017880    1980 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/proxy-client.key: {Name:mkf7be5341e2b0a1985e6f7cceb5b6f759eb4fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 09:31:10.018098    1980 certs.go:484] found cert: /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca-key.pem (1675 bytes)
	I0307 09:31:10.018292    1980 certs.go:484] found cert: /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem (1078 bytes)
	I0307 09:31:10.018310    1980 certs.go:484] found cert: /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem (1123 bytes)
	I0307 09:31:10.018405    1980 certs.go:484] found cert: /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/key.pem (1679 bytes)
	I0307 09:31:10.018826    1980 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 09:31:10.027484    1980 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0307 09:31:10.035487    1980 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 09:31:10.043729    1980 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0307 09:31:10.051618    1980 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0307 09:31:10.059558    1980 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0307 09:31:10.067638    1980 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0307 09:31:10.075585    1980 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0307 09:31:10.084038    1980 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 09:31:10.092013    1980 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0307 09:31:10.098856    1980 ssh_runner.go:195] Run: openssl version
	I0307 09:31:10.101166    1980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 09:31:10.104932    1980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 09:31:10.106548    1980 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 17:31 /usr/share/ca-certificates/minikubeCA.pem
	I0307 09:31:10.106569    1980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 09:31:10.108746    1980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
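The b5213941.0 name is not arbitrary: it is the OpenSSL subject hash of the CA, which is what the x509 -hash call above computed. The two steps combine roughly as (sketch, run inside the guest):

	# Link the CA under its subject-hash name so OpenSSL-based clients can find it
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"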
	I0307 09:31:10.112269    1980 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0307 09:31:10.113872    1980 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0307 09:31:10.113894    1980 kubeadm.go:391] StartCluster: {Name:addons-040000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-040000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 09:31:10.113990    1980 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0307 09:31:10.119640    1980 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0307 09:31:10.123197    1980 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 09:31:10.126683    1980 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 09:31:10.130338    1980 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 09:31:10.130344    1980 kubeadm.go:156] found existing configuration files:
	
	I0307 09:31:10.130367    1980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0307 09:31:10.134041    1980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0307 09:31:10.134067    1980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0307 09:31:10.137793    1980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0307 09:31:10.141403    1980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0307 09:31:10.141431    1980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0307 09:31:10.144680    1980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0307 09:31:10.150535    1980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0307 09:31:10.150589    1980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0307 09:31:10.154096    1980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0307 09:31:10.157392    1980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0307 09:31:10.157423    1980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0307 09:31:10.161075    1980 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0307 09:31:10.189166    1980 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0307 09:31:10.189201    1980 kubeadm.go:309] [preflight] Running pre-flight checks
	I0307 09:31:10.242154    1980 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0307 09:31:10.242224    1980 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0307 09:31:10.242274    1980 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0307 09:31:10.341892    1980 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0307 09:31:10.359062    1980 out.go:204]   - Generating certificates and keys ...
	I0307 09:31:10.359100    1980 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0307 09:31:10.359131    1980 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0307 09:31:10.481082    1980 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0307 09:31:10.576755    1980 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0307 09:31:10.722412    1980 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0307 09:31:10.815283    1980 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0307 09:31:10.884581    1980 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0307 09:31:10.884640    1980 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-040000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0307 09:31:10.958743    1980 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0307 09:31:10.958824    1980 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-040000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0307 09:31:11.038495    1980 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0307 09:31:11.440461    1980 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0307 09:31:11.473121    1980 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0307 09:31:11.473151    1980 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0307 09:31:11.557757    1980 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0307 09:31:11.748662    1980 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0307 09:31:11.935497    1980 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0307 09:31:11.998037    1980 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0307 09:31:11.998243    1980 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0307 09:31:11.999370    1980 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0307 09:31:12.003606    1980 out.go:204]   - Booting up control plane ...
	I0307 09:31:12.003651    1980 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0307 09:31:12.003685    1980 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0307 09:31:12.003719    1980 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0307 09:31:12.007519    1980 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 09:31:12.007903    1980 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 09:31:12.007928    1980 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0307 09:31:12.116048    1980 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0307 09:31:15.618033    1980 kubeadm.go:309] [apiclient] All control plane components are healthy after 3.502128 seconds
	I0307 09:31:15.618103    1980 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0307 09:31:15.623429    1980 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0307 09:31:16.131629    1980 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0307 09:31:16.131743    1980 kubeadm.go:309] [mark-control-plane] Marking the node addons-040000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0307 09:31:16.640048    1980 kubeadm.go:309] [bootstrap-token] Using token: 9cffgw.ca2xd2er3dam4hns
	I0307 09:31:16.645824    1980 out.go:204]   - Configuring RBAC rules ...
	I0307 09:31:16.645884    1980 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0307 09:31:16.645933    1980 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0307 09:31:16.648069    1980 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0307 09:31:16.649145    1980 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0307 09:31:16.650952    1980 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0307 09:31:16.652122    1980 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0307 09:31:16.656251    1980 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0307 09:31:16.801677    1980 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0307 09:31:17.046183    1980 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0307 09:31:17.046481    1980 kubeadm.go:309] 
	I0307 09:31:17.046520    1980 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0307 09:31:17.046530    1980 kubeadm.go:309] 
	I0307 09:31:17.046571    1980 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0307 09:31:17.046575    1980 kubeadm.go:309] 
	I0307 09:31:17.046587    1980 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0307 09:31:17.046619    1980 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0307 09:31:17.046644    1980 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0307 09:31:17.046647    1980 kubeadm.go:309] 
	I0307 09:31:17.046675    1980 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0307 09:31:17.046678    1980 kubeadm.go:309] 
	I0307 09:31:17.046706    1980 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0307 09:31:17.046708    1980 kubeadm.go:309] 
	I0307 09:31:17.046732    1980 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0307 09:31:17.046775    1980 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0307 09:31:17.046818    1980 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0307 09:31:17.046823    1980 kubeadm.go:309] 
	I0307 09:31:17.046874    1980 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0307 09:31:17.046915    1980 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0307 09:31:17.046918    1980 kubeadm.go:309] 
	I0307 09:31:17.046965    1980 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 9cffgw.ca2xd2er3dam4hns \
	I0307 09:31:17.047016    1980 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4f3f709732e35797580c9b8d11f82ef6c52734bdc7940106dd5836141654d720 \
	I0307 09:31:17.047034    1980 kubeadm.go:309] 	--control-plane 
	I0307 09:31:17.047040    1980 kubeadm.go:309] 
	I0307 09:31:17.047075    1980 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0307 09:31:17.047078    1980 kubeadm.go:309] 
	I0307 09:31:17.047114    1980 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 9cffgw.ca2xd2er3dam4hns \
	I0307 09:31:17.047168    1980 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4f3f709732e35797580c9b8d11f82ef6c52734bdc7940106dd5836141654d720 
	I0307 09:31:17.047218    1980 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
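The --discovery-token-ca-cert-hash printed in the join commands is the SHA-256 of the cluster CA's public key; it can be re-derived on the control plane with the standard openssl pipeline from the kubeadm docs (sketch; the CA path matches this cluster's certificatesDir):

	# Recompute the discovery token CA cert hash from the cluster CA
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'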
	I0307 09:31:17.047224    1980 cni.go:84] Creating CNI manager for ""
	I0307 09:31:17.047232    1980 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 09:31:17.051317    1980 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0307 09:31:17.057345    1980 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0307 09:31:17.061451    1980 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
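The 457-byte 1-k8s.conflist written here configures the CNI bridge plugin recommended above; a representative conflist of this shape looks like the following (an assumption-laden sketch, not the byte-exact file minikube ships — only the 10.244.0.0/16 pod subnet is taken from this log):

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF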
	I0307 09:31:17.067918    1980 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0307 09:31:17.068029    1980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 09:31:17.068030    1980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-040000 minikube.k8s.io/updated_at=2024_03_07T09_31_17_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c2be9b5c96c7962d271300acacf405d2402b272f minikube.k8s.io/name=addons-040000 minikube.k8s.io/primary=true
	I0307 09:31:17.072825    1980 ops.go:34] apiserver oom_adj: -16
	I0307 09:31:17.119930    1980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 09:31:17.621986    1980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 09:31:18.121946    1980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 09:31:18.622041    1980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 09:31:19.121987    1980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 09:31:19.621990    1980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 09:31:20.121993    1980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 09:31:20.621992    1980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 09:31:21.121960    1980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 09:31:21.621938    1980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 09:31:22.121993    1980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 09:31:22.621962    1980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 09:31:23.121940    1980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 09:31:23.621260    1980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 09:31:24.121943    1980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 09:31:24.621940    1980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 09:31:25.121962    1980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 09:31:25.621929    1980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 09:31:26.121873    1980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 09:31:26.621937    1980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 09:31:27.121862    1980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 09:31:27.621875    1980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 09:31:28.121889    1980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 09:31:28.621865    1980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 09:31:29.121870    1980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 09:31:29.621872    1980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 09:31:30.121775    1980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 09:31:30.620545    1980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 09:31:31.121858    1980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 09:31:31.158581    1980 kubeadm.go:1106] duration metric: took 14.0907535s to wait for elevateKubeSystemPrivileges
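The burst of "kubectl get sa default" calls above is a poll loop: minikube retries until the default ServiceAccount exists, the signal that the controller-manager is serving. Reduced to shell, the same wait is (sketch, run inside the guest):

	# Block until the default ServiceAccount appears (~14s in this run)
	until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	  --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done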
	W0307 09:31:31.158629    1980 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0307 09:31:31.158633    1980 kubeadm.go:393] duration metric: took 21.044985291s to StartCluster
	I0307 09:31:31.158641    1980 settings.go:142] acquiring lock: {Name:mke72688bb63f8128eac153bbf90929d78ec9d29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 09:31:31.158792    1980 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 09:31:31.159003    1980 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/kubeconfig: {Name:mkeef9e7922e618c2ac8219607b646aeaf5f61cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 09:31:31.159233    1980 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0307 09:31:31.159261    1980 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 09:31:31.162970    1980 out.go:177] * Verifying Kubernetes components...
	I0307 09:31:31.159306    1980 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0307 09:31:31.159551    1980 config.go:182] Loaded profile config "addons-040000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
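The toEnable map above is the same per-profile state that the minikube CLI reports; individual addons can be inspected and toggled from the host (sketch using this run's profile):

	# Inspect and toggle addons for the profile under test
	minikube -p addons-040000 addons list
	minikube -p addons-040000 addons enable ingress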
	I0307 09:31:31.170066    1980 addons.go:69] Setting yakd=true in profile "addons-040000"
	I0307 09:31:31.170077    1980 addons.go:69] Setting inspektor-gadget=true in profile "addons-040000"
	I0307 09:31:31.170095    1980 addons.go:234] Setting addon yakd=true in "addons-040000"
	I0307 09:31:31.170098    1980 addons.go:234] Setting addon inspektor-gadget=true in "addons-040000"
	I0307 09:31:31.170114    1980 host.go:66] Checking if "addons-040000" exists ...
	I0307 09:31:31.170118    1980 host.go:66] Checking if "addons-040000" exists ...
	I0307 09:31:31.170173    1980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 09:31:31.170182    1980 addons.go:69] Setting cloud-spanner=true in profile "addons-040000"
	I0307 09:31:31.170186    1980 addons.go:69] Setting default-storageclass=true in profile "addons-040000"
	I0307 09:31:31.170194    1980 addons.go:69] Setting ingress=true in profile "addons-040000"
	I0307 09:31:31.170192    1980 addons.go:69] Setting storage-provisioner=true in profile "addons-040000"
	I0307 09:31:31.170201    1980 addons.go:234] Setting addon ingress=true in "addons-040000"
	I0307 09:31:31.170165    1980 addons.go:69] Setting registry=true in profile "addons-040000"
	I0307 09:31:31.170221    1980 addons.go:234] Setting addon storage-provisioner=true in "addons-040000"
	I0307 09:31:31.170282    1980 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-040000"
	I0307 09:31:31.170190    1980 addons.go:234] Setting addon cloud-spanner=true in "addons-040000"
	I0307 09:31:31.170309    1980 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-040000"
	I0307 09:31:31.170316    1980 host.go:66] Checking if "addons-040000" exists ...
	I0307 09:31:31.170223    1980 host.go:66] Checking if "addons-040000" exists ...
	I0307 09:31:31.170364    1980 host.go:66] Checking if "addons-040000" exists ...
	I0307 09:31:31.170312    1980 host.go:66] Checking if "addons-040000" exists ...
	I0307 09:31:31.170173    1980 addons.go:69] Setting metrics-server=true in profile "addons-040000"
	I0307 09:31:31.170591    1980 addons.go:234] Setting addon metrics-server=true in "addons-040000"
	I0307 09:31:31.170226    1980 addons.go:69] Setting gcp-auth=true in profile "addons-040000"
	I0307 09:31:31.170604    1980 host.go:66] Checking if "addons-040000" exists ...
	I0307 09:31:31.170624    1980 mustload.go:65] Loading cluster: addons-040000
	I0307 09:31:31.170230    1980 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-040000"
	I0307 09:31:31.170754    1980 retry.go:31] will retry after 923.211199ms: connect: dial unix /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/monitor: connect: connection refused
	I0307 09:31:31.170768    1980 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-040000"
	I0307 09:31:31.170770    1980 config.go:182] Loaded profile config "addons-040000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 09:31:31.170779    1980 host.go:66] Checking if "addons-040000" exists ...
	I0307 09:31:31.170246    1980 addons.go:234] Setting addon registry=true in "addons-040000"
	I0307 09:31:31.170814    1980 host.go:66] Checking if "addons-040000" exists ...
	I0307 09:31:31.170839    1980 retry.go:31] will retry after 1.404404252s: connect: dial unix /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/monitor: connect: connection refused
	I0307 09:31:31.170178    1980 addons.go:69] Setting volumesnapshots=true in profile "addons-040000"
	I0307 09:31:31.170851    1980 addons.go:234] Setting addon volumesnapshots=true in "addons-040000"
	I0307 09:31:31.170850    1980 retry.go:31] will retry after 934.01822ms: connect: dial unix /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/monitor: connect: connection refused
	I0307 09:31:31.170226    1980 addons.go:69] Setting ingress-dns=true in profile "addons-040000"
	I0307 09:31:31.170864    1980 host.go:66] Checking if "addons-040000" exists ...
	I0307 09:31:31.170868    1980 addons.go:234] Setting addon ingress-dns=true in "addons-040000"
	I0307 09:31:31.170875    1980 host.go:66] Checking if "addons-040000" exists ...
	I0307 09:31:31.170936    1980 retry.go:31] will retry after 1.47709835s: connect: dial unix /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/monitor: connect: connection refused
	I0307 09:31:31.170948    1980 retry.go:31] will retry after 707.013878ms: connect: dial unix /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/monitor: connect: connection refused
	I0307 09:31:31.170174    1980 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-040000"
	I0307 09:31:31.170975    1980 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-040000"
	I0307 09:31:31.170271    1980 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-040000"
	I0307 09:31:31.171027    1980 retry.go:31] will retry after 1.029491327s: connect: dial unix /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/monitor: connect: connection refused
	I0307 09:31:31.171124    1980 retry.go:31] will retry after 655.185929ms: connect: dial unix /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/monitor: connect: connection refused
	I0307 09:31:31.171147    1980 retry.go:31] will retry after 593.32146ms: connect: dial unix /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/monitor: connect: connection refused
	I0307 09:31:31.171195    1980 retry.go:31] will retry after 918.77915ms: connect: dial unix /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/monitor: connect: connection refused
	I0307 09:31:31.171227    1980 retry.go:31] will retry after 1.316257198s: connect: dial unix /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/monitor: connect: connection refused
	I0307 09:31:31.171287    1980 retry.go:31] will retry after 1.308227771s: connect: dial unix /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/monitor: connect: connection refused
	I0307 09:31:31.171357    1980 retry.go:31] will retry after 1.220391654s: connect: dial unix /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/monitor: connect: connection refused
	I0307 09:31:31.175914    1980 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0307 09:31:31.181979    1980 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.25.1
	I0307 09:31:31.193899    1980 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0307 09:31:31.188028    1980 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0307 09:31:31.197959    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0307 09:31:31.197974    1980 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/id_rsa Username:docker}
	I0307 09:31:31.200953    1980 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0307 09:31:31.204054    1980 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0307 09:31:31.204062    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0307 09:31:31.204069    1980 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/id_rsa Username:docker}
	I0307 09:31:31.239853    1980 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0307 09:31:31.286181    1980 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 09:31:31.302888    1980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0307 09:31:31.408334    1980 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0307 09:31:31.408345    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0307 09:31:31.425074    1980 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0307 09:31:31.425087    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0307 09:31:31.452086    1980 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0307 09:31:31.452097    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0307 09:31:31.463745    1980 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0307 09:31:31.463758    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0307 09:31:31.472237    1980 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0307 09:31:31.472250    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0307 09:31:31.482565    1980 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0307 09:31:31.482578    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0307 09:31:31.512392    1980 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0307 09:31:31.512403    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0307 09:31:31.520411    1980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0307 09:31:31.773155    1980 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0307 09:31:31.783116    1980 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0307 09:31:31.790063    1980 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0307 09:31:31.799135    1980 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0307 09:31:31.809158    1980 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0307 09:31:31.821115    1980 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0307 09:31:31.830053    1980 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0307 09:31:31.839071    1980 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0307 09:31:31.847141    1980 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0307 09:31:31.843194    1980 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0307 09:31:31.848665    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0307 09:31:31.848677    1980 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/id_rsa Username:docker}
	I0307 09:31:31.848730    1980 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0307 09:31:31.848737    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0307 09:31:31.848743    1980 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/id_rsa Username:docker}
	I0307 09:31:31.884915    1980 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0307 09:31:31.893150    1980 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0307 09:31:31.893163    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0307 09:31:31.893174    1980 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/id_rsa Username:docker}
	I0307 09:31:32.031865    1980 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0307 09:31:32.031877    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0307 09:31:32.039885    1980 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0307 09:31:32.039895    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0307 09:31:32.055180    1980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0307 09:31:32.058759    1980 start.go:948] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0307 09:31:32.060080    1980 node_ready.go:35] waiting up to 6m0s for node "addons-040000" to be "Ready" ...
	I0307 09:31:32.062004    1980 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0307 09:31:32.062014    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0307 09:31:32.069673    1980 node_ready.go:49] node "addons-040000" has status "Ready":"True"
	I0307 09:31:32.069693    1980 node_ready.go:38] duration metric: took 9.5905ms for node "addons-040000" to be "Ready" ...
	I0307 09:31:32.069708    1980 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
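The readiness polling that starts here has a direct kubectl equivalent, sketched below with the node name and kube-dns label from this log (note kubectl wait errors out if no pod matches the label yet, so the pods must already be scheduled):

	# Wait for the node, then for CoreDNS, to report Ready
	kubectl wait --for=condition=Ready node/addons-040000 --timeout=6m0s
	kubectl wait --for=condition=Ready pod -l k8s-app=kube-dns -n kube-system --timeout=6m0s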
	I0307 09:31:32.081601    1980 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0307 09:31:32.081612    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0307 09:31:32.083491    1980 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-bwktx" in "kube-system" namespace to be "Ready" ...
	I0307 09:31:32.093111    1980 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0307 09:31:32.097118    1980 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0307 09:31:32.097131    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0307 09:31:32.097142    1980 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/id_rsa Username:docker}
	I0307 09:31:32.097406    1980 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0307 09:31:32.097411    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0307 09:31:32.102059    1980 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 09:31:32.114059    1980 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0307 09:31:32.108196    1980 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 09:31:32.120123    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0307 09:31:32.120138    1980 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/id_rsa Username:docker}
	I0307 09:31:32.120212    1980 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0307 09:31:32.120217    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0307 09:31:32.120222    1980 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/id_rsa Username:docker}
	I0307 09:31:32.148425    1980 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0307 09:31:32.148435    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0307 09:31:32.160079    1980 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0307 09:31:32.160090    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0307 09:31:32.173341    1980 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0307 09:31:32.173351    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0307 09:31:32.192803    1980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0307 09:31:32.199539    1980 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0307 09:31:32.199552    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0307 09:31:32.201145    1980 host.go:66] Checking if "addons-040000" exists ...
	I0307 09:31:32.241554    1980 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0307 09:31:32.241567    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0307 09:31:32.248661    1980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 09:31:32.249204    1980 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0307 09:31:32.249210    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0307 09:31:32.271760    1980 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0307 09:31:32.271772    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0307 09:31:32.279787    1980 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0307 09:31:32.279799    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0307 09:31:32.291650    1980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0307 09:31:32.315227    1980 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0307 09:31:32.315237    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0307 09:31:32.325609    1980 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0307 09:31:32.325621    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0307 09:31:32.358713    1980 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0307 09:31:32.358724    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0307 09:31:32.382861    1980 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0307 09:31:32.382876    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0307 09:31:32.394810    1980 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-040000"
	I0307 09:31:32.394832    1980 host.go:66] Checking if "addons-040000" exists ...
	I0307 09:31:32.401570    1980 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0307 09:31:32.409615    1980 out.go:177]   - Using image docker.io/busybox:stable
	I0307 09:31:32.413661    1980 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0307 09:31:32.413670    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0307 09:31:32.413679    1980 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/id_rsa Username:docker}
	I0307 09:31:32.422566    1980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0307 09:31:32.482620    1980 addons.go:234] Setting addon default-storageclass=true in "addons-040000"
	I0307 09:31:32.482641    1980 host.go:66] Checking if "addons-040000" exists ...
	I0307 09:31:32.483386    1980 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0307 09:31:32.483393    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0307 09:31:32.483400    1980 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/id_rsa Username:docker}
	I0307 09:31:32.491933    1980 out.go:177]   - Using image docker.io/registry:2.8.3
	I0307 09:31:32.495990    1980 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0307 09:31:32.500042    1980 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0307 09:31:32.500050    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0307 09:31:32.500060    1980 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/id_rsa Username:docker}
	I0307 09:31:32.500351    1980 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0307 09:31:32.500358    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0307 09:31:32.565136    1980 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-040000" context rescaled to 1 replicas
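
The "rescaled to 1 replicas" line above is minikube trimming the stock coredns deployment down to a single replica for a one-node cluster; the displaced replica terminates and later appears as the "Succeeded" coredns pod that the readiness waiter skips. A sketch of the same rescale through client-go's scale subresource, with a placeholder kubeconfig path:

	package main

	import (
		"context"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		ctx := context.Background()
		// Read the deployment's current scale, then write it back with one replica.
		scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		scale.Spec.Replicas = 1
		if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			log.Fatal(err)
		}
	}
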
	I0307 09:31:32.573868    1980 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0307 09:31:32.573879    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0307 09:31:32.578949    1980 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0307 09:31:32.582977    1980 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0307 09:31:32.582985    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0307 09:31:32.582994    1980 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/id_rsa Username:docker}
	I0307 09:31:32.583273    1980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0307 09:31:32.591105    1980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0307 09:31:32.654019    1980 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0307 09:31:32.657973    1980 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0307 09:31:32.657981    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0307 09:31:32.657991    1980 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/id_rsa Username:docker}
	I0307 09:31:32.683637    1980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0307 09:31:32.712037    1980 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0307 09:31:32.712052    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0307 09:31:32.729009    1980 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0307 09:31:32.729020    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0307 09:31:32.746192    1980 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0307 09:31:32.746205    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0307 09:31:32.753818    1980 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0307 09:31:32.753829    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0307 09:31:32.768990    1980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0307 09:31:32.779907    1980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0307 09:31:32.851322    1980 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0307 09:31:32.851334    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0307 09:31:32.951196    1980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
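
The addon applies above are dispatched independently, so they run concurrently; the "Completed:" entries that follow report each command's own duration and arrive in completion order, not submission order. A rough sketch of that fan-out using errgroup, with placeholder manifest paths (an illustration of the pattern, not minikube's actual code):

	package main

	import (
		"context"
		"log"
		"os/exec"

		"golang.org/x/sync/errgroup"
	)

	func main() {
		// One batch of manifest files per addon (placeholder paths).
		batches := [][]string{
			{"/tmp/addon-a.yaml"},
			{"/tmp/addon-b-rbac.yaml", "/tmp/addon-b-deploy.yaml"},
		}
		g, ctx := errgroup.WithContext(context.Background())
		for _, files := range batches {
			files := files // capture the loop variable for the goroutine
			g.Go(func() error {
				args := []string{"apply"}
				for _, f := range files {
					args = append(args, "-f", f)
				}
				// Each apply runs on its own goroutine; completion order is arbitrary.
				return exec.CommandContext(ctx, "kubectl", args...).Run()
			})
		}
		if err := g.Wait(); err != nil {
			log.Fatal(err)
		}
	}
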
	I0307 09:31:34.207625    1980 pod_ready.go:102] pod "coredns-5dd5756b68-bwktx" in "kube-system" namespace has status "Ready":"False"
	I0307 09:31:34.783141    1980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (3.480275709s)
	I0307 09:31:34.783154    1980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.262765458s)
	I0307 09:31:34.783179    1980 addons.go:470] Verifying addon ingress=true in "addons-040000"
	I0307 09:31:34.783203    1980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.728044s)
	I0307 09:31:34.783258    1980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (2.590467791s)
	I0307 09:31:34.783283    1980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.53463325s)
	I0307 09:31:34.783304    1980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.491670959s)
	I0307 09:31:34.783344    1980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.360794959s)
	I0307 09:31:34.783370    1980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (2.200116292s)
	I0307 09:31:34.788024    1980 out.go:177] * Verifying ingress addon...
	W0307 09:31:34.788113    1980 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0307 09:31:34.796278    1980 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0307 09:31:34.798897    1980 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-040000 service yakd-dashboard -n yakd-dashboard
	
	I0307 09:31:34.802010    1980 retry.go:31] will retry after 228.828799ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
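
The failure above is an ordering race rather than a broken manifest: the VolumeSnapshotClass object is applied in the same batch as the CRD that defines its kind, and the apiserver has not yet published the new REST mapping, hence "ensure CRDs are installed first". The retry below re-runs the batch (adding --force) once the CRDs have had a moment to register. A simplified retry-with-backoff sketch around kubectl apply, with placeholder file paths:

	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	// applyWithRetry re-runs `kubectl apply` with growing delays, riding out
	// the window between a CRD being created and its kinds becoming mappable.
	func applyWithRetry(files []string, attempts int) error {
		args := []string{"apply"}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		delay := 250 * time.Millisecond
		var err error
		for i := 0; i < attempts; i++ {
			if err = exec.Command("kubectl", args...).Run(); err == nil {
				return nil
			}
			log.Printf("apply failed (attempt %d): %v; retrying in %s", i+1, err, delay)
			time.Sleep(delay)
			delay *= 2
		}
		return err
	}

	func main() {
		files := []string{"/tmp/snapshot-crds.yaml", "/tmp/snapshot-class.yaml"} // placeholders
		if err := applyWithRetry(files, 5); err != nil {
			log.Fatal(err)
		}
	}
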
	I0307 09:31:34.819092    1980 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0307 09:31:34.819102    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:35.032975    1980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0307 09:31:35.105335    1980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.514238292s)
	I0307 09:31:35.105347    1980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.421726833s)
	I0307 09:31:35.105354    1980 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-040000"
	I0307 09:31:35.105428    1980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.325539333s)
	I0307 09:31:35.105440    1980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (2.154256042s)
	I0307 09:31:35.105460    1980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.336442375s)
	I0307 09:31:35.111928    1980 out.go:177] * Verifying csi-hostpath-driver addon...
	I0307 09:31:35.111956    1980 addons.go:470] Verifying addon metrics-server=true in "addons-040000"
	I0307 09:31:35.111970    1980 addons.go:470] Verifying addon registry=true in "addons-040000"
	I0307 09:31:35.119376    1980 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0307 09:31:35.122898    1980 out.go:177] * Verifying registry addon...
	I0307 09:31:35.130394    1980 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0307 09:31:35.136501    1980 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0307 09:31:35.136511    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:35.136587    1980 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0307 09:31:35.136591    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
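
The kapi.go lines that dominate the remainder of this log are a poll loop: list the pods matching a label selector, log the current phase, sleep, and repeat until every match is running. A condensed client-go sketch of that loop (the real waiter also checks the Ready condition; the kubeconfig path is a placeholder):

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForLabel polls until every pod matching selector in ns is Running.
	func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.Background(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			allRunning := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					log.Printf("waiting for pod %q, current state: %s", selector, p.Status.Phase)
					allRunning = false
				}
			}
			if allRunning {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %q", selector)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		if err := waitForLabel(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
			log.Fatal(err)
		}
	}
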
	I0307 09:31:35.305705    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:35.628013    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:35.634709    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:35.814527    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:36.130178    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:36.132305    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:36.306533    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:36.588347    1980 pod_ready.go:102] pod "coredns-5dd5756b68-bwktx" in "kube-system" namespace has status "Ready":"False"
	I0307 09:31:36.627274    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:36.633501    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:36.808301    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:37.130239    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:37.133159    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:37.306053    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:37.627637    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:37.632850    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:37.806944    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:38.127574    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:38.132944    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:38.375234    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:38.589073    1980 pod_ready.go:102] pod "coredns-5dd5756b68-bwktx" in "kube-system" namespace has status "Ready":"False"
	I0307 09:31:38.627949    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:38.633822    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:38.805158    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:38.806161    1980 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0307 09:31:38.806172    1980 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/id_rsa Username:docker}
	I0307 09:31:38.830555    1980 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0307 09:31:38.836541    1980 addons.go:234] Setting addon gcp-auth=true in "addons-040000"
	I0307 09:31:38.836562    1980 host.go:66] Checking if "addons-040000" exists ...
	I0307 09:31:38.837407    1980 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0307 09:31:38.837415    1980 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/addons-040000/id_rsa Username:docker}
	I0307 09:31:38.863479    1980 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0307 09:31:38.866453    1980 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.1
	I0307 09:31:38.870408    1980 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0307 09:31:38.870413    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0307 09:31:38.878206    1980 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0307 09:31:38.878216    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0307 09:31:38.884326    1980 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0307 09:31:38.884335    1980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0307 09:31:38.889807    1980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0307 09:31:39.125556    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:39.136801    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:39.154201    1980 addons.go:470] Verifying addon gcp-auth=true in "addons-040000"
	I0307 09:31:39.157937    1980 out.go:177] * Verifying gcp-auth addon...
	I0307 09:31:39.165609    1980 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0307 09:31:39.168441    1980 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0307 09:31:39.168448    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:39.305902    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:39.627722    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:39.632909    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:39.668965    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:39.806027    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:40.127216    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:40.133171    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:40.168879    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:40.306151    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:40.627814    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:40.633073    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:40.669094    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:40.806275    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:41.087533    1980 pod_ready.go:102] pod "coredns-5dd5756b68-bwktx" in "kube-system" namespace has status "Ready":"False"
	I0307 09:31:41.127293    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:41.132711    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:41.168619    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:41.305859    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:41.627774    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:41.632755    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:41.669130    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:41.806295    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:42.088562    1980 pod_ready.go:97] pod "coredns-5dd5756b68-bwktx" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-07 09:31:31 -0800 PST Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-07 09:31:31 -0800 PST Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-07 09:31:31 -0800 PST Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-07 09:31:31 -0800 PST Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.105.2 HostIPs:[] PodIP: PodIPs:[] StartTime:2024-03-07 09:31:31 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-03-07 09:31:31 -0800 PST,FinishedAt:2024-03-07 09:31:41 -0800 PST,ContainerID:docker://55c84fa36497d2337b191def05c44e73e602ad01d6979e22700ec28a917fbd97,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://55c84fa36497d2337b191def05c44e73e602ad01d6979e22700ec28a917fbd97 Started:0x14003cce3a0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0307 09:31:42.088574    1980 pod_ready.go:81] duration metric: took 10.0051845s for pod "coredns-5dd5756b68-bwktx" in "kube-system" namespace to be "Ready" ...
	E0307 09:31:42.088580    1980 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-bwktx" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-07 09:31:31 -0800 PST Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-07 09:31:31 -0800 PST Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-07 09:31:31 -0800 PST Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-07 09:31:31 -0800 PST Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.105.2 HostIPs:[] PodIP: PodIPs:[] StartTime:2024-03-07 09:31:31 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-03-07 09:31:31 -0800 PST,FinishedAt:2024-03-07 09:31:41 -0800 PST,ContainerID:docker://55c84fa36497d2337b191def05c44e73e602ad01d6979e22700ec28a917fbd97,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://55c84fa36497d2337b191def05c44e73e602ad01d6979e22700ec28a917fbd97 Started:0x14003cce3a0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0307 09:31:42.088584    1980 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-nkks4" in "kube-system" namespace to be "Ready" ...
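
The "Succeeded (skipping!)" dump above is the coredns replica displaced by the earlier rescale to one replica: a pod whose phase is Succeeded has exited and can never reach the Ready condition, so the waiter drops it and moves on to the surviving coredns-5dd5756b68-nkks4. A sketch of that readiness predicate over client-go types:

	package podready

	import (
		corev1 "k8s.io/api/core/v1"
	)

	// Status reports (ready, skip): skip is true for pods that have run to
	// completion (Succeeded or Failed) and therefore can never become Ready.
	func Status(pod *corev1.Pod) (ready, skip bool) {
		if pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed {
			return false, true
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, false
			}
		}
		return false, false
	}
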
	I0307 09:31:42.126306    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:42.133463    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:42.169194    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:42.305379    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:42.627357    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:42.632999    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:42.669274    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:42.806268    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:43.127319    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:43.133122    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:43.169142    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:43.306215    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:43.627571    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:43.633041    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:43.669008    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:43.805976    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:44.093303    1980 pod_ready.go:102] pod "coredns-5dd5756b68-nkks4" in "kube-system" namespace has status "Ready":"False"
	I0307 09:31:44.127258    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:44.133053    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:44.169265    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:44.306084    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:44.627651    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:44.632997    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:44.669354    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:44.806147    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:45.125893    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:45.134272    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:45.169444    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:45.306301    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:45.627636    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:45.632867    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:45.669155    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:45.806259    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:46.093491    1980 pod_ready.go:102] pod "coredns-5dd5756b68-nkks4" in "kube-system" namespace has status "Ready":"False"
	I0307 09:31:46.127130    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:46.132170    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:46.168882    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:46.305988    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:46.627527    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:46.632859    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:46.669104    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:46.806162    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:47.127280    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:47.133230    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:47.169030    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:47.306178    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:47.627401    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:47.633975    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:47.669427    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:47.805937    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:48.093703    1980 pod_ready.go:102] pod "coredns-5dd5756b68-nkks4" in "kube-system" namespace has status "Ready":"False"
	I0307 09:31:48.129427    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:48.132359    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:48.169424    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:48.306434    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:48.627426    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:48.633112    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:48.669312    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:48.806068    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:49.127788    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:49.132720    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:49.169279    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:49.305968    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:49.627342    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:49.633153    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:49.670017    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:49.806004    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:50.093719    1980 pod_ready.go:102] pod "coredns-5dd5756b68-nkks4" in "kube-system" namespace has status "Ready":"False"
	I0307 09:31:50.127155    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:50.133162    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:50.167359    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:50.305983    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:50.626229    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:50.633107    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:50.669819    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:50.805954    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:51.127020    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:51.132560    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:51.168992    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:51.304062    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:51.627116    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:51.632807    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:51.669371    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:51.805829    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:52.093733    1980 pod_ready.go:102] pod "coredns-5dd5756b68-nkks4" in "kube-system" namespace has status "Ready":"False"
	I0307 09:31:52.126986    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:52.134237    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:52.170033    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:52.305732    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:52.626244    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:52.633085    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:52.669665    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:52.805988    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:53.127158    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:53.132882    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:53.168937    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:53.305778    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:53.627013    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:53.632903    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:53.669054    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:53.806111    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:54.127160    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:54.132712    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:54.169014    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:54.306138    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:54.592269    1980 pod_ready.go:102] pod "coredns-5dd5756b68-nkks4" in "kube-system" namespace has status "Ready":"False"
	I0307 09:31:54.626987    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:54.632843    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:54.668383    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:54.806142    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:55.127124    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:55.132750    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:55.168897    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:55.305747    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:55.627135    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:55.632730    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:55.669112    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:55.805720    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:56.127046    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:56.132169    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:56.168694    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:56.305836    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:56.593325    1980 pod_ready.go:102] pod "coredns-5dd5756b68-nkks4" in "kube-system" namespace has status "Ready":"False"
	I0307 09:31:56.626827    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:56.632692    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:56.668660    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:56.805876    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:57.127002    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:57.132605    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:57.168726    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:57.305689    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:57.627011    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:57.632503    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:57.668654    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:57.805883    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:58.127088    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:58.132677    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:58.168900    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:58.306023    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:58.627374    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:58.632768    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:58.669113    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:58.805753    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:59.092570    1980 pod_ready.go:102] pod "coredns-5dd5756b68-nkks4" in "kube-system" namespace has status "Ready":"False"
	I0307 09:31:59.126868    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:59.132792    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:59.168619    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:59.305764    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:31:59.626857    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:31:59.632635    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:31:59.668733    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:31:59.807068    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:00.127012    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:00.132631    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:00.168797    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:00.305744    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:00.626734    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:00.632867    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:00.668584    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:00.805315    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:01.093206    1980 pod_ready.go:102] pod "coredns-5dd5756b68-nkks4" in "kube-system" namespace has status "Ready":"False"
	I0307 09:32:01.126966    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:01.132883    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:01.168950    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:01.306016    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:01.593198    1980 pod_ready.go:92] pod "coredns-5dd5756b68-nkks4" in "kube-system" namespace has status "Ready":"True"
	I0307 09:32:01.593209    1980 pod_ready.go:81] duration metric: took 19.504846125s for pod "coredns-5dd5756b68-nkks4" in "kube-system" namespace to be "Ready" ...
	I0307 09:32:01.593214    1980 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-040000" in "kube-system" namespace to be "Ready" ...
	I0307 09:32:01.595473    1980 pod_ready.go:92] pod "etcd-addons-040000" in "kube-system" namespace has status "Ready":"True"
	I0307 09:32:01.595479    1980 pod_ready.go:81] duration metric: took 2.262125ms for pod "etcd-addons-040000" in "kube-system" namespace to be "Ready" ...
	I0307 09:32:01.595483    1980 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-040000" in "kube-system" namespace to be "Ready" ...
	I0307 09:32:01.597845    1980 pod_ready.go:92] pod "kube-apiserver-addons-040000" in "kube-system" namespace has status "Ready":"True"
	I0307 09:32:01.597851    1980 pod_ready.go:81] duration metric: took 2.365583ms for pod "kube-apiserver-addons-040000" in "kube-system" namespace to be "Ready" ...
	I0307 09:32:01.597855    1980 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-040000" in "kube-system" namespace to be "Ready" ...
	I0307 09:32:01.600097    1980 pod_ready.go:92] pod "kube-controller-manager-addons-040000" in "kube-system" namespace has status "Ready":"True"
	I0307 09:32:01.600102    1980 pod_ready.go:81] duration metric: took 2.243916ms for pod "kube-controller-manager-addons-040000" in "kube-system" namespace to be "Ready" ...
	I0307 09:32:01.600106    1980 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ddvf7" in "kube-system" namespace to be "Ready" ...
	I0307 09:32:01.602862    1980 pod_ready.go:92] pod "kube-proxy-ddvf7" in "kube-system" namespace has status "Ready":"True"
	I0307 09:32:01.602868    1980 pod_ready.go:81] duration metric: took 2.758291ms for pod "kube-proxy-ddvf7" in "kube-system" namespace to be "Ready" ...
	I0307 09:32:01.602871    1980 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-040000" in "kube-system" namespace to be "Ready" ...
	I0307 09:32:01.627217    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:01.632681    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:01.668990    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:01.804487    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:01.994109    1980 pod_ready.go:92] pod "kube-scheduler-addons-040000" in "kube-system" namespace has status "Ready":"True"
	I0307 09:32:01.994118    1980 pod_ready.go:81] duration metric: took 391.247708ms for pod "kube-scheduler-addons-040000" in "kube-system" namespace to be "Ready" ...
	I0307 09:32:01.994121    1980 pod_ready.go:38] duration metric: took 29.92475475s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
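
The pod_ready waits above poll each pod's Ready condition until it turns True or the 6m0s budget runs out. Below is a minimal client-go sketch of that polling pattern, not minikube's actual pod_ready.go implementation; the helper name waitPodReady, the 500ms interval, and the kubeconfig path are illustrative assumptions:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls a pod until its Ready condition reports True, the
    // pattern behind the pod_ready.go lines above. Interval and timeout are
    // illustrative assumptions, not minikube's values.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string) error {
        return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat transient errors as "not ready yet" and keep polling
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Pod name copied from the log above.
        fmt.Println(waitPodReady(cs, "kube-system", "coredns-5dd5756b68-nkks4"))
    }
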
	I0307 09:32:01.994131    1980 api_server.go:52] waiting for apiserver process to appear ...
	I0307 09:32:01.994196    1980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 09:32:02.001109    1980 api_server.go:72] duration metric: took 30.842192875s to wait for apiserver process to appear ...
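
The apiserver process check is a single pgrep over the guest's process table: -f matches the pattern against the full command line, -x requires an exact match, and -n selects the newest matching process. minikube runs it through sudo over SSH; the local, unprivileged sketch below is an assumption for illustration:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // pgrep exits 0 when at least one process matches and prints its PID;
        // a non-zero exit means no kube-apiserver process was found yet.
        out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            fmt.Println("kube-apiserver not running yet:", err)
            return
        }
        fmt.Printf("kube-apiserver pid: %s", out)
    }
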
	I0307 09:32:02.001118    1980 api_server.go:88] waiting for apiserver healthz status ...
	I0307 09:32:02.001125    1980 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0307 09:32:02.005555    1980 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0307 09:32:02.006148    1980 api_server.go:141] control plane version: v1.28.4
	I0307 09:32:02.006155    1980 api_server.go:131] duration metric: took 5.034333ms to wait for apiserver health ...
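
The healthz probe is a plain HTTPS GET against the endpoint logged above, expecting status 200 with the body "ok". The apiserver's certificate is signed by the cluster CA, so this sketch skips verification purely for illustration; real code should trust the CA instead:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Skipping TLS verification is a shortcut for this sketch only.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.105.2:8443/healthz")
        if err != nil {
            fmt.Println("healthz unreachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect: 200: ok
    }
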
	I0307 09:32:02.006158    1980 system_pods.go:43] waiting for kube-system pods to appear ...
	I0307 09:32:02.127493    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:02.133018    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:02.168656    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:02.198134    1980 system_pods.go:59] 17 kube-system pods found
	I0307 09:32:02.198146    1980 system_pods.go:61] "coredns-5dd5756b68-nkks4" [73738848-193a-406e-8165-78e73e6eed1c] Running
	I0307 09:32:02.198151    1980 system_pods.go:61] "csi-hostpath-attacher-0" [5c6e1671-ae92-4203-a9dd-ae17a3355fc7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0307 09:32:02.198154    1980 system_pods.go:61] "csi-hostpath-resizer-0" [0d08e605-dd65-4f74-8315-cf923f6c2946] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0307 09:32:02.198157    1980 system_pods.go:61] "csi-hostpathplugin-qfb8j" [60151c1c-8c8f-4723-a632-7bda361c5154] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0307 09:32:02.198160    1980 system_pods.go:61] "etcd-addons-040000" [711d20a4-2398-4f0c-b03b-af7638beffb7] Running
	I0307 09:32:02.198162    1980 system_pods.go:61] "kube-apiserver-addons-040000" [8a4f5e95-f259-4068-87d7-19816017c9ee] Running
	I0307 09:32:02.198164    1980 system_pods.go:61] "kube-controller-manager-addons-040000" [a83398e9-b137-4e53-812c-0101dfc2aad3] Running
	I0307 09:32:02.198167    1980 system_pods.go:61] "kube-ingress-dns-minikube" [05579015-4b13-4c6d-a462-eb2c7db03546] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0307 09:32:02.198169    1980 system_pods.go:61] "kube-proxy-ddvf7" [8db41cfd-fc0c-4c44-ae44-69490e9c999b] Running
	I0307 09:32:02.198180    1980 system_pods.go:61] "kube-scheduler-addons-040000" [3ff14fdd-67f5-41b6-854a-9dafe80f5e56] Running
	I0307 09:32:02.198185    1980 system_pods.go:61] "metrics-server-69cf46c98-vl44c" [a61e5c1a-873a-4244-9696-dc91d18176c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0307 09:32:02.198189    1980 system_pods.go:61] "nvidia-device-plugin-daemonset-pxhm2" [a5b807d3-67be-4a1d-baaf-1e0d7aac4caa] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0307 09:32:02.198193    1980 system_pods.go:61] "registry-4jsfg" [1625931b-1b36-4877-9d0e-5a1ec025d3a7] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0307 09:32:02.198196    1980 system_pods.go:61] "registry-proxy-t4p6j" [064ff01e-79aa-4a33-bdd3-077357314bb1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0307 09:32:02.198200    1980 system_pods.go:61] "snapshot-controller-58dbcc7b99-4r5kf" [b26a03e6-20a4-4f2e-afd9-1e5e70996c25] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0307 09:32:02.198203    1980 system_pods.go:61] "snapshot-controller-58dbcc7b99-wvz8l" [bd2bdfd5-80be-4f6b-a96e-637fb9a7c714] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0307 09:32:02.198206    1980 system_pods.go:61] "storage-provisioner" [e001f08f-4c40-4447-9e72-687d3106c9b2] Running
	I0307 09:32:02.198209    1980 system_pods.go:74] duration metric: took 192.05ms to wait for pod list to return data ...
	I0307 09:32:02.198214    1980 default_sa.go:34] waiting for default service account to be created ...
	I0307 09:32:02.285585    1980 default_sa.go:45] found service account: "default"
	I0307 09:32:02.305978    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:02.285596    1980 default_sa.go:55] duration metric: took 195.833459ms for default service account to be created ...
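
The default_sa wait only has to confirm that the "default" ServiceAccount exists, since kube-controller-manager creates it asynchronously after the namespace appears. A compact sketch of that check (kubeconfig path assumed):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        sa, err := cs.CoreV1().ServiceAccounts("default").Get(context.Background(), "default", metav1.GetOptions{})
        if err != nil {
            fmt.Println("default service account not created yet:", err)
            return
        }
        fmt.Println("found service account:", sa.Name)
    }
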
	I0307 09:32:02.285600    1980 system_pods.go:116] waiting for k8s-apps to be running ...
	I0307 09:32:02.489155    1980 system_pods.go:86] 17 kube-system pods found
	I0307 09:32:02.489166    1980 system_pods.go:89] "coredns-5dd5756b68-nkks4" [73738848-193a-406e-8165-78e73e6eed1c] Running
	I0307 09:32:02.489170    1980 system_pods.go:89] "csi-hostpath-attacher-0" [5c6e1671-ae92-4203-a9dd-ae17a3355fc7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0307 09:32:02.489174    1980 system_pods.go:89] "csi-hostpath-resizer-0" [0d08e605-dd65-4f74-8315-cf923f6c2946] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0307 09:32:02.489177    1980 system_pods.go:89] "csi-hostpathplugin-qfb8j" [60151c1c-8c8f-4723-a632-7bda361c5154] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0307 09:32:02.489179    1980 system_pods.go:89] "etcd-addons-040000" [711d20a4-2398-4f0c-b03b-af7638beffb7] Running
	I0307 09:32:02.489181    1980 system_pods.go:89] "kube-apiserver-addons-040000" [8a4f5e95-f259-4068-87d7-19816017c9ee] Running
	I0307 09:32:02.489183    1980 system_pods.go:89] "kube-controller-manager-addons-040000" [a83398e9-b137-4e53-812c-0101dfc2aad3] Running
	I0307 09:32:02.489186    1980 system_pods.go:89] "kube-ingress-dns-minikube" [05579015-4b13-4c6d-a462-eb2c7db03546] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0307 09:32:02.489188    1980 system_pods.go:89] "kube-proxy-ddvf7" [8db41cfd-fc0c-4c44-ae44-69490e9c999b] Running
	I0307 09:32:02.489190    1980 system_pods.go:89] "kube-scheduler-addons-040000" [3ff14fdd-67f5-41b6-854a-9dafe80f5e56] Running
	I0307 09:32:02.489193    1980 system_pods.go:89] "metrics-server-69cf46c98-vl44c" [a61e5c1a-873a-4244-9696-dc91d18176c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0307 09:32:02.489197    1980 system_pods.go:89] "nvidia-device-plugin-daemonset-pxhm2" [a5b807d3-67be-4a1d-baaf-1e0d7aac4caa] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0307 09:32:02.489199    1980 system_pods.go:89] "registry-4jsfg" [1625931b-1b36-4877-9d0e-5a1ec025d3a7] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0307 09:32:02.489203    1980 system_pods.go:89] "registry-proxy-t4p6j" [064ff01e-79aa-4a33-bdd3-077357314bb1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0307 09:32:02.489205    1980 system_pods.go:89] "snapshot-controller-58dbcc7b99-4r5kf" [b26a03e6-20a4-4f2e-afd9-1e5e70996c25] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0307 09:32:02.489208    1980 system_pods.go:89] "snapshot-controller-58dbcc7b99-wvz8l" [bd2bdfd5-80be-4f6b-a96e-637fb9a7c714] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0307 09:32:02.489210    1980 system_pods.go:89] "storage-provisioner" [e001f08f-4c40-4447-9e72-687d3106c9b2] Running
	I0307 09:32:02.489214    1980 system_pods.go:126] duration metric: took 203.613292ms to wait for k8s-apps to be running ...
	I0307 09:32:02.489217    1980 system_svc.go:44] waiting for kubelet service to be running ....
	I0307 09:32:02.489279    1980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 09:32:02.495801    1980 system_svc.go:56] duration metric: took 6.579708ms WaitForService to wait for kubelet
	I0307 09:32:02.495810    1980 kubeadm.go:576] duration metric: took 31.445354167s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
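
The kubelet check relies entirely on the exit status of systemctl is-active --quiet, which prints nothing and returns 0 only while the unit is active. The log runs it via sudo over SSH inside the guest; the local sketch below simplifies the unit argument to plain "kubelet":

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `systemctl is-active --quiet <unit>` signals the unit state purely
        // through its exit code (0 == active); a non-nil error means inactive.
        if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }
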
	I0307 09:32:02.495822    1980 node_conditions.go:102] verifying NodePressure condition ...
	I0307 09:32:02.517116    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:02.525009    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:02.560647    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:02.685683    1980 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0307 09:32:02.685694    1980 node_conditions.go:123] node cpu capacity is 2
	I0307 09:32:02.685700    1980 node_conditions.go:105] duration metric: took 189.878083ms to run NodePressure ...
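
The NodePressure verification reads each node's status: the two capacity figures logged above (ephemeral storage and CPU count) come from node.Status.Capacity, and the pressure signals (MemoryPressure, DiskPressure, PIDPressure) are node conditions that should report False on a healthy node. A sketch of that read (kubeconfig path assumed):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
            for _, c := range n.Status.Conditions {
                // Any True condition other than Ready is a pressure signal
                // (MemoryPressure, DiskPressure, PIDPressure) on this node.
                if c.Status == corev1.ConditionTrue && c.Type != corev1.NodeReady {
                    fmt.Printf("  pressure condition %s is True\n", c.Type)
                }
            }
        }
    }
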
	I0307 09:32:02.685706    1980 start.go:240] waiting for startup goroutines ...
	I0307 09:32:02.697132    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:03.017088    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:03.024634    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:03.060485    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:03.197650    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:03.517882    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:03.524608    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:03.560190    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:03.697493    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:04.019005    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:04.024760    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:04.060330    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:04.197449    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:04.518949    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:04.523918    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:04.559358    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:04.697466    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:05.018917    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:05.024047    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:05.060431    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:05.197641    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:05.518814    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:05.524134    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:05.562276    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:05.697083    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:06.018724    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:06.024331    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:06.064620    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:06.197543    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:06.519063    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:06.523941    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:06.560167    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:06.697301    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:07.018841    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:07.024123    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:07.060261    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:07.196871    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:07.521837    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:07.523348    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:07.560090    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:07.697101    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:08.017857    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:08.023917    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:08.060830    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:08.197984    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:08.518702    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:08.524108    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:08.560306    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:08.697470    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:09.018845    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:09.023892    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:09.060242    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:09.196748    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:09.518538    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:09.524171    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:09.560341    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:09.703856    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:10.019084    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:10.023974    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:10.060056    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:10.196204    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:10.518799    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:10.524155    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:10.560023    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:10.695373    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:11.019085    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:11.022989    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:11.060045    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:11.197017    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:11.519088    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:11.523784    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:11.560221    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:11.696089    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:12.018835    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:12.023830    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:12.060114    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:12.195341    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:12.518735    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:12.524076    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:12.560126    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:12.697140    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:13.018766    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:13.027062    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:13.060690    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:13.197284    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:13.518634    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:13.523735    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:13.559911    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:13.696948    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:14.017205    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:14.024343    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:14.060186    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:14.197091    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:14.520832    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:14.523494    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:14.567908    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:14.697368    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:15.018646    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:15.023757    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:15.060014    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:15.196202    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:15.518738    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:15.523879    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:15.559742    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:15.697281    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:16.018640    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:16.023274    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:16.060082    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:16.197187    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:16.516567    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:16.524535    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:16.559955    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:16.697129    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:17.018618    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:17.023759    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:17.059824    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:17.196821    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:17.518516    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:17.523841    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:17.559714    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:17.697040    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:18.018537    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:18.023858    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:18.059952    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:18.195595    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:18.518313    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:18.523722    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:18.559814    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:18.697090    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:19.018930    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:19.023553    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:19.059910    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:19.196410    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:19.516926    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:19.522985    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:19.560080    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:19.696747    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:20.017190    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:20.023890    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:20.058443    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:20.196619    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:20.519998    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:20.523400    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:20.559845    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:20.696779    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:21.018288    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:21.023042    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:21.059983    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:21.196711    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:21.518149    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:21.523724    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:21.559677    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:21.696598    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:22.018473    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:22.023470    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:22.060089    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:22.197466    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:22.518676    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:22.523461    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:22.559754    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:22.696782    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:23.018881    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:23.023756    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:23.059602    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:23.196607    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:23.518498    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:23.523743    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:23.559654    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:23.696723    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:24.017684    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:24.023844    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:24.059812    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:24.196594    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:24.518320    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:24.523663    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:24.559520    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:24.696913    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:25.018889    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:25.023392    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:25.059769    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:25.196909    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:25.518527    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:25.523998    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:25.559566    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:25.696713    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:26.018141    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:26.023190    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:26.058376    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:26.196625    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:26.518423    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:26.523707    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:26.559631    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:26.696811    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:27.018471    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:27.023373    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:27.059568    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:27.196601    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:27.518832    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:27.523391    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:27.559613    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:27.696916    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:28.018477    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:28.023858    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:28.060066    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:28.196900    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:28.518254    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:28.523825    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:28.559813    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:28.695964    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:29.018231    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:29.024068    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:29.059888    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:29.196694    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:29.521818    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:29.529963    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:29.563607    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:29.696809    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:30.018193    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:30.023492    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:30.059754    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:30.196721    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:30.518034    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:30.523500    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:30.559867    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:30.695559    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:31.018015    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:31.022502    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:31.058723    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:31.196626    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:31.517343    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:31.523509    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 09:32:31.557734    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:31.696251    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:32.018432    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:32.023219    1980 kapi.go:107] duration metric: took 57.00251275s to wait for kubernetes.io/minikube-addons=registry ...
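
Unlike the per-pod waits earlier, the kapi.go waits poll by label selector and only complete once every matching pod is up, which is why the registry wait above took the full 57s while its pods pulled images. A sketch of the selector-based poll for the label that just completed (selector copied from the log; the 500ms interval and 10m timeout are assumptions):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 10*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
                    LabelSelector: "kubernetes.io/minikube-addons=registry",
                })
                if err != nil || len(pods.Items) == 0 {
                    return false, nil // nothing scheduled yet; keep polling
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        return false, nil // at least one matching pod still Pending
                    }
                }
                return true, nil
            })
        fmt.Println("registry pods running:", err == nil)
    }
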
	I0307 09:32:32.059080    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:32.196301    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:32.517014    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:32.559436    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:32.694979    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:33.018507    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:33.059254    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:33.196411    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:33.518331    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:33.559274    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:33.696349    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:34.018007    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:34.059485    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:34.196332    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:34.516063    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:34.559704    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:34.696629    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:35.017821    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:35.059476    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:35.196153    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:35.517927    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:35.559714    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:35.696517    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:36.017930    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:36.059235    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:36.194549    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:36.519194    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:36.559382    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:36.696544    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:37.018036    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:37.059414    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:37.196191    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:37.518026    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:37.559177    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:37.696235    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:38.018091    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:38.059190    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:38.196027    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:38.517715    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:38.559332    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:38.695999    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:39.015716    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:39.059433    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:39.195920    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:39.518194    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:39.559243    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:39.696891    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:40.017251    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:40.059352    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:40.194958    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:40.517771    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:40.559103    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:40.696399    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:41.017969    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:41.059231    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:41.196249    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:41.518137    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:41.559101    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:41.696225    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:42.018024    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:42.059175    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:42.195998    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:42.517968    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:42.559284    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:42.696428    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:43.019168    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:43.059139    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:43.195753    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:43.517842    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:43.559041    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:43.695863    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:44.017611    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:44.058936    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:44.195096    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:44.517561    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:44.559451    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:44.695216    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:45.018224    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:45.059100    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:45.195856    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:45.517585    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:45.558894    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:45.695773    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:46.018689    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:46.058992    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:46.193884    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:46.517647    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:46.559379    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:46.696234    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:47.017919    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:47.057226    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:47.195904    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:47.516447    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:47.559245    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:47.696054    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:48.021273    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:48.059272    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:48.195994    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:48.517273    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:48.558904    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:48.695929    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:49.017434    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:49.059268    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:49.196134    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:49.516003    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:49.559013    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:49.695863    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:50.016783    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:50.058884    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:50.196236    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:50.517177    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:50.558791    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:50.695987    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:51.017117    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:51.059286    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:51.198225    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:51.517520    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:51.559418    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:51.699572    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:52.017227    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:52.058843    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:52.195518    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:52.517422    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:52.558820    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:52.695913    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:53.017130    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:53.058954    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:53.195588    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:53.519056    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:53.558627    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:53.695618    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:54.017012    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:54.058940    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:54.195633    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:54.517211    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:54.558940    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:54.695847    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:55.017096    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:55.059814    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:55.195639    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:55.517517    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:55.557798    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:55.695623    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:56.017366    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:56.057397    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:56.194469    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:56.517626    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:56.558705    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:56.695828    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:57.017328    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:57.059232    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:57.194771    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:57.520673    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:57.559109    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:57.695549    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:58.017071    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:58.058401    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:58.196071    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:58.519675    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:58.558675    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:58.693994    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:59.017009    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:59.059019    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:59.195827    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:32:59.517059    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:32:59.557552    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:32:59.695488    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:00.016972    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:33:00.058587    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:00.195717    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:00.516958    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:33:00.558405    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:00.696391    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:01.016760    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:33:01.058482    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:01.195742    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:01.516356    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:33:01.558552    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:01.695450    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:02.016941    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:33:02.058479    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:02.195850    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:02.517042    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:33:02.558464    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:02.696503    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:03.017022    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:33:03.058770    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:03.195546    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:03.517297    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:33:03.558579    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:03.695327    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:04.016960    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:33:04.058470    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:04.195363    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:04.516899    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:33:04.557143    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:04.695419    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:05.016885    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:33:05.058154    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:05.194984    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:05.517281    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:33:05.558279    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:05.695308    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:06.022752    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:33:06.058224    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:06.195178    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:06.517204    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:33:06.558783    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:06.695291    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:07.016571    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:33:07.058751    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:07.195320    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:07.516798    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:33:07.558549    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:07.695289    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:08.016482    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:33:08.058934    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:08.195361    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:08.517114    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:33:08.558505    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:08.695414    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:09.016616    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:33:09.067793    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:09.195391    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:09.516896    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:33:09.558343    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:09.695276    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:10.016629    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:33:10.059097    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:10.195843    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:10.516627    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:33:10.558166    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:10.694414    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:11.016896    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:33:11.058500    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:11.195287    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:11.516686    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:33:11.558249    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:11.695110    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:12.016507    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:33:12.058385    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:12.194597    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:12.516793    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:33:12.557626    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:12.695238    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:13.016557    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 09:33:13.058114    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:13.195188    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:13.517009    1980 kapi.go:107] duration metric: took 1m38.508691416s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0307 09:33:13.558237    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:13.694996    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:14.058204    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:14.195106    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:14.558329    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:14.695357    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:15.058172    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:15.195105    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:15.558585    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:15.694963    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:16.057929    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:16.193528    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:16.558140    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:16.695238    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:17.058067    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:17.195403    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:17.558506    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:17.694043    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:18.057965    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:18.195008    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:18.557612    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:18.694998    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:19.058269    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:19.193643    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:19.558615    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:19.694932    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:20.058111    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:20.194312    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:20.558661    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:20.693325    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:21.058017    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:21.195054    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:21.558404    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:21.694875    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:22.058109    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:22.193705    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:22.558495    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:22.695097    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:23.057950    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:23.195199    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:23.558052    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:23.703442    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:24.057889    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:24.194648    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:24.557989    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:24.694821    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:25.057817    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:25.195026    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:25.558227    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:25.694764    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:26.057833    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:26.195028    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:26.558170    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:26.694694    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:27.058198    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:27.195059    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:27.558158    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:27.694701    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:28.057887    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:28.194969    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:28.558052    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:28.694700    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:29.057832    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:29.194999    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:29.557996    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:29.694723    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:30.057806    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:30.194618    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:30.557914    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:30.694733    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:31.057688    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:31.194608    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:31.558034    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:31.694574    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:32.057570    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:32.194514    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:32.557574    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:32.694491    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:33.057684    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:33.194565    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:33.558398    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:33.693715    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:34.057297    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:34.192843    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:34.556818    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:34.694641    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:35.057517    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:35.194517    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:35.558098    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:35.694197    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:36.056263    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:36.194205    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:36.557904    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:36.694549    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:37.056853    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:37.194203    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:37.557876    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:37.694455    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:38.057447    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:38.194004    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:38.558150    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:38.694553    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:39.055894    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:39.194202    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:39.557650    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:39.694581    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:40.057333    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:40.194275    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:40.557255    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:40.694249    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:41.057628    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:41.194180    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:41.557542    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:41.694072    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:42.055760    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:42.193919    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:42.557534    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:42.694372    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:43.057212    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:43.194326    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:43.557677    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:43.693817    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:44.057329    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:44.192541    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:44.557236    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:44.694490    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:45.057085    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:45.194119    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:45.557789    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:45.694455    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:46.057125    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:46.194092    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:46.557655    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:46.693965    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:47.057432    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:47.196142    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:47.557408    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:47.693243    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:48.057290    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:48.193801    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:48.557248    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:48.694390    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:49.057066    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:49.193819    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:49.556965    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:49.694108    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:50.057158    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:50.193852    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:50.557166    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:50.694022    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:51.057164    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:51.194049    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:51.557681    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:51.693724    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:52.057092    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:52.193908    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:52.557087    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:52.694888    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:53.056941    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:53.193954    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:53.557522    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:53.693626    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:54.056990    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:54.193534    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:54.557209    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:54.693884    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:55.056754    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:55.193626    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:55.557150    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:55.693850    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:56.056612    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:56.193601    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:56.556976    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:56.694059    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:57.056673    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:57.193745    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:57.557064    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:57.693856    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:58.056804    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:58.193666    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:58.555603    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:58.693721    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:59.056776    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:59.193777    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:33:59.556633    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:33:59.693929    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:34:00.056609    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:34:00.194880    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:34:00.556804    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:34:00.693903    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:34:01.056791    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:34:01.193617    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:34:01.556813    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:34:01.692082    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:34:02.056697    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:34:02.193881    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:34:02.557005    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:34:02.693709    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:34:03.056501    1980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 09:34:03.193466    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:34:03.556758    1980 kapi.go:107] duration metric: took 2m24.503824s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0307 09:34:03.560919    1980 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-040000 cluster.
	I0307 09:34:03.564848    1980 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0307 09:34:03.568827    1980 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
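
The three out.go messages above describe gcp-auth's opt-out mechanism: credential injection is skipped for any pod whose metadata carries a label with the gcp-auth-skip-secret key. Below is a minimal sketch of such a pod object using client-go types; the pod name, namespace, and command are illustrative assumptions, not values taken from this run:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// examplePod builds a hypothetical pod that opts out of GCP credential
// mounting. Per the message above, it is the presence of the
// "gcp-auth-skip-secret" label key that matters; "true" is just a
// conventional value.
func examplePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "no-creds-demo", // illustrative name
			Namespace: "default",
			Labels:    map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "demo",
				Image:   "busybox:stable", // same image this test pulls later in the log
				Command: []string{"sleep", "3600"},
			}},
		},
	}
}

func main() {
	fmt.Println(examplePod().Labels)
}
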
	I0307 09:34:03.693276    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:34:04.193806    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:34:04.693669    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:34:05.193599    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:34:05.693772    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:34:06.191734    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:34:06.693365    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:34:07.194223    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:34:07.693781    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:34:08.193451    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:34:08.693649    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:34:09.193268    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:34:09.693621    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:34:10.193768    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:34:10.714887    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:34:11.193463    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:34:11.693424    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:34:12.193289    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:34:12.693617    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:34:13.193441    1980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 09:34:13.693490    1980 kapi.go:107] duration metric: took 2m39.0102765s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0307 09:34:13.697546    1980 out.go:177] * Enabled addons: inspektor-gadget, ingress-dns, storage-provisioner, cloud-spanner, yakd, storage-provisioner-rancher, metrics-server, nvidia-device-plugin, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I0307 09:34:13.708717    1980 addons.go:505] duration metric: took 2m42.662533542s for enable addons: enabled=[inspektor-gadget ingress-dns storage-provisioner cloud-spanner yakd storage-provisioner-rancher metrics-server nvidia-device-plugin default-storageclass volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I0307 09:34:13.708731    1980 start.go:245] waiting for cluster config update ...
	I0307 09:34:13.708739    1980 start.go:254] writing updated cluster config ...
	I0307 09:34:13.709741    1980 ssh_runner.go:195] Run: rm -f paused
	I0307 09:34:13.854487    1980 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0307 09:34:13.858655    1980 out.go:177] * Done! kubectl is now configured to use "addons-040000" cluster and "default" namespace by default
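
The long runs of kapi.go:96 lines in this log are a fixed-interval poll: for each addon label selector (app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=csi-hostpath-driver, kubernetes.io/minikube-addons=gcp-auth) minikube lists the matching pods roughly every half second until they leave Pending, then emits the kapi.go:107 duration metric. The sketch below reproduces that pattern with client-go; it is an approximation for readability, not minikube's actual kapi implementation, and the "ingress-nginx" namespace and ten-minute timeout are assumptions:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls until every pod matching selector is Running,
// mirroring the repeated `waiting for pod ... current state: Pending`
// lines and the final duration metric seen above.
func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient errors and empty lists: keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
	if err == nil {
		fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
	}
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Selector taken from the log above; the namespace is an assumption.
	_ = waitForPods(context.Background(), cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 10*time.Minute)
}
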
	
	
	==> Docker <==
	Mar 07 17:35:16 addons-040000 dockerd[1114]: time="2024-03-07T17:35:16.217024510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 17:35:16 addons-040000 dockerd[1114]: time="2024-03-07T17:35:16.217082186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 17:35:16 addons-040000 cri-dockerd[1004]: time="2024-03-07T17:35:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/da5d3ed44b8e96d13ce6fa7dd595964f3e766e3990888d53ea63b834e177101a/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Mar 07 17:35:18 addons-040000 cri-dockerd[1004]: time="2024-03-07T17:35:18Z" level=info msg="Stop pulling image busybox:stable: Status: Downloaded newer image for busybox:stable"
	Mar 07 17:35:18 addons-040000 dockerd[1114]: time="2024-03-07T17:35:18.056816828Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 07 17:35:18 addons-040000 dockerd[1114]: time="2024-03-07T17:35:18.056861335Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 07 17:35:18 addons-040000 dockerd[1114]: time="2024-03-07T17:35:18.056874295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 17:35:18 addons-040000 dockerd[1114]: time="2024-03-07T17:35:18.056908717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 17:35:18 addons-040000 dockerd[1114]: time="2024-03-07T17:35:18.079549848Z" level=info msg="shim disconnected" id=a1a9786d0736cb28c670babf276c1a377033cd051d55e27990ede73a5a708bf2 namespace=moby
	Mar 07 17:35:18 addons-040000 dockerd[1114]: time="2024-03-07T17:35:18.079583687Z" level=warning msg="cleaning up after shim disconnected" id=a1a9786d0736cb28c670babf276c1a377033cd051d55e27990ede73a5a708bf2 namespace=moby
	Mar 07 17:35:18 addons-040000 dockerd[1114]: time="2024-03-07T17:35:18.079588188Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 07 17:35:18 addons-040000 dockerd[1107]: time="2024-03-07T17:35:18.079748463Z" level=info msg="ignoring event" container=a1a9786d0736cb28c670babf276c1a377033cd051d55e27990ede73a5a708bf2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 07 17:35:18 addons-040000 dockerd[1107]: time="2024-03-07T17:35:18.468215671Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=6761e504cee132bcfd60137f211881977ee4ffb7218145dfa3211565708557a1
	Mar 07 17:35:18 addons-040000 dockerd[1114]: time="2024-03-07T17:35:18.507546573Z" level=info msg="shim disconnected" id=6761e504cee132bcfd60137f211881977ee4ffb7218145dfa3211565708557a1 namespace=moby
	Mar 07 17:35:18 addons-040000 dockerd[1114]: time="2024-03-07T17:35:18.507594247Z" level=warning msg="cleaning up after shim disconnected" id=6761e504cee132bcfd60137f211881977ee4ffb7218145dfa3211565708557a1 namespace=moby
	Mar 07 17:35:18 addons-040000 dockerd[1114]: time="2024-03-07T17:35:18.507598414Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 07 17:35:18 addons-040000 dockerd[1107]: time="2024-03-07T17:35:18.507717433Z" level=info msg="ignoring event" container=6761e504cee132bcfd60137f211881977ee4ffb7218145dfa3211565708557a1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 07 17:35:18 addons-040000 dockerd[1107]: time="2024-03-07T17:35:18.575193172Z" level=info msg="ignoring event" container=1b8f97ec67398d519af4964d9490b5212f37ea58bcd6dc5aacbf27bfc4c0e8fc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 07 17:35:18 addons-040000 dockerd[1114]: time="2024-03-07T17:35:18.575353197Z" level=info msg="shim disconnected" id=1b8f97ec67398d519af4964d9490b5212f37ea58bcd6dc5aacbf27bfc4c0e8fc namespace=moby
	Mar 07 17:35:18 addons-040000 dockerd[1114]: time="2024-03-07T17:35:18.575394912Z" level=warning msg="cleaning up after shim disconnected" id=1b8f97ec67398d519af4964d9490b5212f37ea58bcd6dc5aacbf27bfc4c0e8fc namespace=moby
	Mar 07 17:35:18 addons-040000 dockerd[1114]: time="2024-03-07T17:35:18.575424708Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 07 17:35:19 addons-040000 dockerd[1114]: time="2024-03-07T17:35:19.341184624Z" level=info msg="shim disconnected" id=da5d3ed44b8e96d13ce6fa7dd595964f3e766e3990888d53ea63b834e177101a namespace=moby
	Mar 07 17:35:19 addons-040000 dockerd[1107]: time="2024-03-07T17:35:19.341134366Z" level=info msg="ignoring event" container=da5d3ed44b8e96d13ce6fa7dd595964f3e766e3990888d53ea63b834e177101a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 07 17:35:19 addons-040000 dockerd[1114]: time="2024-03-07T17:35:19.341415909Z" level=warning msg="cleaning up after shim disconnected" id=da5d3ed44b8e96d13ce6fa7dd595964f3e766e3990888d53ea63b834e177101a namespace=moby
	Mar 07 17:35:19 addons-040000 dockerd[1114]: time="2024-03-07T17:35:19.341421577Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED              STATE               NAME                       ATTEMPT             POD ID              POD
	a1a9786d0736c       busybox@sha256:478209e7be50e5f5c9fd47a6a71d43c119dd44c393160e49dc4bb86f99a439de                                  4 seconds ago        Exited              busybox                    0                   da5d3ed44b8e9       test-local-path
	8246021da41bf       dd1b12fcb6097                                                                                                    11 seconds ago       Exited              hello-world-app            1                   9cba179e5230c       hello-world-app-5d77478584-88dvx
	b8405a18725f2       nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9                                    31 seconds ago       Running             nginx                      0                   4ffacee2dcb5e       nginx
	c146f7283e4c7       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32     About a minute ago   Running             gcp-auth                   0                   6dc0b49cc6058       gcp-auth-5f6b4f85fd-xjqgh
	ddfa0ac0a83f1       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246           2 minutes ago        Running             local-path-provisioner     0                   b1717156f914b       local-path-provisioner-78b46b4d5c-wfmvm
	2158ae7e57954       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                            3 minutes ago        Running             yakd                       0                   a403108a5c470       yakd-dashboard-9947fc6bf-944jt
	d34bcc82957ca       nvcr.io/nvidia/k8s-device-plugin@sha256:50aa9517d771e3b0ffa7fded8f1e988dba680a7ff5efce162ce31d1b5ec043e2         3 minutes ago        Running             nvidia-device-plugin-ctr   0                   7d448589e09c4       nvidia-device-plugin-daemonset-pxhm2
	ed0110f3ac56e       gcr.io/cloud-spanner-emulator/emulator@sha256:41d5dccfcf13817a2348beba0ca7c650ffdd795f7fcbe975b7822c9eed262e15   3 minutes ago        Running             cloud-spanner-emulator     0                   724a7ab0b74e4       cloud-spanner-emulator-6548d5df46-mlnhn
	279b5a5cba6ab       ba04bb24b9575                                                                                                    3 minutes ago        Running             storage-provisioner        0                   95ca05feb88e0       storage-provisioner
	49cdb8bf7fc08       97e04611ad434                                                                                                    3 minutes ago        Running             coredns                    0                   d7de64891acb2       coredns-5dd5756b68-nkks4
	784a86c7aef8f       3ca3ca488cf13                                                                                                    3 minutes ago        Running             kube-proxy                 0                   5f376f5e47f1e       kube-proxy-ddvf7
	6bf38d40682ac       04b4c447bb9d4                                                                                                    4 minutes ago        Running             kube-apiserver             0                   88e0a49188d22       kube-apiserver-addons-040000
	e7a7a63f3e983       9cdd6470f48c8                                                                                                    4 minutes ago        Running             etcd                       0                   817b43cb7d6b4       etcd-addons-040000
	8929635b885a0       05c284c929889                                                                                                    4 minutes ago        Running             kube-scheduler             0                   8f1cecf7a5a4e       kube-scheduler-addons-040000
	97fc87d1c0a61       9961cbceaf234                                                                                                    4 minutes ago        Running             kube-controller-manager    0                   060a85dd09e31       kube-controller-manager-addons-040000
	
	
	==> coredns [49cdb8bf7fc0] <==
	[INFO] 10.244.0.20:36429 - 59253 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000015377s
	[INFO] 10.244.0.20:36429 - 49536 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000068051s
	[INFO] 10.244.0.20:60222 - 22076 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00001346s
	[INFO] 10.244.0.20:36429 - 11527 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000037422s
	[INFO] 10.244.0.20:60222 - 61509 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000022461s
	[INFO] 10.244.0.20:36429 - 42600 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000044631s
	[INFO] 10.244.0.20:60222 - 7850 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000015752s
	[INFO] 10.244.0.20:60222 - 60206 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000045382s
	[INFO] 10.244.0.20:60222 - 36244 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000051091s
	[INFO] 10.244.0.20:36429 - 50614 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000015169s
	[INFO] 10.244.0.20:39780 - 58138 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000019212s
	[INFO] 10.244.0.20:36429 - 57049 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000013502s
	[INFO] 10.244.0.20:39780 - 4845 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000011669s
	[INFO] 10.244.0.20:39780 - 48960 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000011543s
	[INFO] 10.244.0.20:39780 - 63526 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000025296s
	[INFO] 10.244.0.20:39780 - 36746 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000009835s
	[INFO] 10.244.0.20:39780 - 12689 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000009627s
	[INFO] 10.244.0.20:39780 - 25337 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000010501s
	[INFO] 10.244.0.20:39899 - 56065 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000050591s
	[INFO] 10.244.0.20:39899 - 16876 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000014336s
	[INFO] 10.244.0.20:39899 - 33695 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000038631s
	[INFO] 10.244.0.20:39899 - 26638 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000016836s
	[INFO] 10.244.0.20:39899 - 18404 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000016253s
	[INFO] 10.244.0.20:39899 - 15828 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000017252s
	[INFO] 10.244.0.20:39899 - 57186 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00003063s
	
	
	==> describe nodes <==
	Name:               addons-040000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-040000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c2be9b5c96c7962d271300acacf405d2402b272f
	                    minikube.k8s.io/name=addons-040000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_07T09_31_17_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-040000
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Mar 2024 17:31:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-040000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Mar 2024 17:35:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Mar 2024 17:35:22 +0000   Thu, 07 Mar 2024 17:31:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Mar 2024 17:35:22 +0000   Thu, 07 Mar 2024 17:31:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Mar 2024 17:35:22 +0000   Thu, 07 Mar 2024 17:31:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Mar 2024 17:35:22 +0000   Thu, 07 Mar 2024 17:31:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-040000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904752Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904752Ki
	  pods:               110
	System Info:
	  Machine ID:                 7bafa7f87c2c4b4db41ab972ca3436ae
	  System UUID:                7bafa7f87c2c4b4db41ab972ca3436ae
	  Boot ID:                    47fa0d07-e473-43f2-bb83-925bceb96527
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6548d5df46-mlnhn                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  default                     hello-world-app-5d77478584-88dvx                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  gcp-auth                    gcp-auth-5f6b4f85fd-xjqgh                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 coredns-5dd5756b68-nkks4                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m51s
	  kube-system                 etcd-addons-040000                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m6s
	  kube-system                 kube-apiserver-addons-040000                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-controller-manager-addons-040000                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-proxy-ddvf7                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 kube-scheduler-addons-040000                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 nvidia-device-plugin-daemonset-pxhm2                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  local-path-storage          helper-pod-delete-pvc-c4f2b98e-ba12-40bb-bdca-a7b767b70ab6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  local-path-storage          local-path-provisioner-78b46b4d5c-wfmvm                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-944jt                                0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     3m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m50s                  kube-proxy       
	  Normal  Starting                 4m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m10s (x8 over 4m10s)  kubelet          Node addons-040000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s (x8 over 4m10s)  kubelet          Node addons-040000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s (x7 over 4m10s)  kubelet          Node addons-040000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m6s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m6s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m6s                   kubelet          Node addons-040000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m6s                   kubelet          Node addons-040000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m6s                   kubelet          Node addons-040000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m3s                   kubelet          Node addons-040000 status is now: NodeReady
	  Normal  RegisteredNode           3m53s                  node-controller  Node addons-040000 event: Registered Node addons-040000 in Controller
	
	
	==> dmesg <==
	[  +2.158952] systemd-fstab-generator[1284]: Ignoring "noauto" option for root device
	[  +2.603804] systemd-fstab-generator[1505]: Ignoring "noauto" option for root device
	[  +0.729484] kauditd_printk_skb: 107 callbacks suppressed
	[  +3.883138] systemd-fstab-generator[2377]: Ignoring "noauto" option for root device
	[ +14.535452] systemd-fstab-generator[3121]: Ignoring "noauto" option for root device
	[  +0.062528] kauditd_printk_skb: 52 callbacks suppressed
	[  +7.237508] kauditd_printk_skb: 261 callbacks suppressed
	[ +11.245074] kauditd_printk_skb: 34 callbacks suppressed
	[Mar 7 17:32] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.488895] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.835386] kauditd_printk_skb: 3 callbacks suppressed
	[  +7.642103] kauditd_printk_skb: 6 callbacks suppressed
	[ +18.238301] kauditd_printk_skb: 11 callbacks suppressed
	[ +14.584147] kauditd_printk_skb: 51 callbacks suppressed
	[Mar 7 17:33] kauditd_printk_skb: 12 callbacks suppressed
	[ +34.002329] kauditd_printk_skb: 2 callbacks suppressed
	[Mar 7 17:34] kauditd_printk_skb: 18 callbacks suppressed
	[ +27.303620] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.003419] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.293568] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.162611] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.215523] kauditd_printk_skb: 19 callbacks suppressed
	[  +8.015557] kauditd_printk_skb: 32 callbacks suppressed
	[Mar 7 17:35] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.280397] kauditd_printk_skb: 20 callbacks suppressed
	
	
	==> etcd [e7a7a63f3e98] <==
	{"level":"info","ts":"2024-03-07T17:31:12.949345Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.2:2380"}
	{"level":"info","ts":"2024-03-07T17:31:12.949549Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"c46d288d2fcb0590","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-03-07T17:31:12.960288Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 switched to configuration voters=(14154013790752671120)"}
	{"level":"info","ts":"2024-03-07T17:31:12.960343Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","added-peer-id":"c46d288d2fcb0590","added-peer-peer-urls":["https://192.168.105.2:2380"]}
	{"level":"info","ts":"2024-03-07T17:31:12.960421Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-07T17:31:12.960458Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-07T17:31:12.960467Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-07T17:31:13.38818Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-07T17:31:13.388218Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-07T17:31:13.388232Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgPreVoteResp from c46d288d2fcb0590 at term 1"}
	{"level":"info","ts":"2024-03-07T17:31:13.388248Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2024-03-07T17:31:13.388269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-03-07T17:31:13.388277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2024-03-07T17:31:13.388281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-03-07T17:31:13.392688Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T17:31:13.393056Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-040000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-07T17:31:13.39312Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-07T17:31:13.393688Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2024-03-07T17:31:13.393937Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T17:31:13.393996Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T17:31:13.394016Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T17:31:13.394266Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-07T17:31:13.394639Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-07T17:31:13.396408Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-07T17:31:13.396423Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> gcp-auth [c146f7283e4c] <==
	2024/03/07 17:34:03 GCP Auth Webhook started!
	2024/03/07 17:34:24 Ready to marshal response ...
	2024/03/07 17:34:24 Ready to write response ...
	2024/03/07 17:34:24 Ready to marshal response ...
	2024/03/07 17:34:24 Ready to write response ...
	2024/03/07 17:34:45 Ready to marshal response ...
	2024/03/07 17:34:45 Ready to write response ...
	2024/03/07 17:34:47 Ready to marshal response ...
	2024/03/07 17:34:47 Ready to write response ...
	2024/03/07 17:34:59 Ready to marshal response ...
	2024/03/07 17:34:59 Ready to write response ...
	2024/03/07 17:34:59 Ready to marshal response ...
	2024/03/07 17:34:59 Ready to write response ...
	2024/03/07 17:34:59 Ready to marshal response ...
	2024/03/07 17:34:59 Ready to write response ...
	2024/03/07 17:35:20 Ready to marshal response ...
	2024/03/07 17:35:20 Ready to write response ...
	
	
	==> kernel <==
	 17:35:23 up 4 min,  0 users,  load average: 0.76, 0.67, 0.31
	Linux addons-040000 5.10.207 #1 SMP PREEMPT Thu Feb 22 23:40:42 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6bf38d40682a] <==
	I0307 17:34:42.605292       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0307 17:34:43.612207       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0307 17:34:47.880890       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0307 17:34:47.976561       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.236.211"}
	I0307 17:34:59.180891       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.0.143"}
	I0307 17:34:59.608669       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0307 17:34:59.608687       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0307 17:34:59.615664       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0307 17:34:59.615686       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0307 17:34:59.653276       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0307 17:34:59.653688       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0307 17:34:59.654965       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0307 17:34:59.655029       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0307 17:34:59.661371       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0307 17:34:59.661692       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0307 17:34:59.663334       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0307 17:34:59.663797       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0307 17:34:59.673302       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0307 17:34:59.673326       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0307 17:34:59.680713       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0307 17:34:59.680961       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0307 17:35:00.655732       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0307 17:35:00.681737       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0307 17:35:00.686493       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0307 17:35:16.797144       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [97fc87d1c0a6] <==
	E0307 17:35:04.359931       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0307 17:35:04.663089       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 17:35:04.663106       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0307 17:35:10.044593       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 17:35:10.044612       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0307 17:35:10.238653       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 17:35:10.238679       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0307 17:35:10.338775       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 17:35:10.338787       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0307 17:35:11.227989       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="32.047µs"
	I0307 17:35:12.244387       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="40.256µs"
	I0307 17:35:13.258109       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="53.258µs"
	I0307 17:35:14.907579       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	W0307 17:35:15.398512       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 17:35:15.398533       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0307 17:35:15.452778       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0307 17:35:15.454099       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="2.667µs"
	I0307 17:35:15.454974       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	W0307 17:35:18.947884       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 17:35:18.947901       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0307 17:35:20.349888       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="3.668µs"
	W0307 17:35:20.685999       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 17:35:20.686018       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0307 17:35:22.796699       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 17:35:22.796718       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [784a86c7aef8] <==
	I0307 17:31:31.752271       1 server_others.go:69] "Using iptables proxy"
	I0307 17:31:31.796837       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0307 17:31:31.901187       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0307 17:31:31.901214       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0307 17:31:31.928375       1 server_others.go:152] "Using iptables Proxier"
	I0307 17:31:31.928427       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0307 17:31:31.928529       1 server.go:846] "Version info" version="v1.28.4"
	I0307 17:31:31.928535       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0307 17:31:31.929342       1 config.go:188] "Starting service config controller"
	I0307 17:31:31.929360       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0307 17:31:31.929374       1 config.go:97] "Starting endpoint slice config controller"
	I0307 17:31:31.929379       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0307 17:31:31.929543       1 config.go:315] "Starting node config controller"
	I0307 17:31:31.929546       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0307 17:31:32.030084       1 shared_informer.go:318] Caches are synced for node config
	I0307 17:31:32.030100       1 shared_informer.go:318] Caches are synced for service config
	I0307 17:31:32.030123       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [8929635b885a] <==
	W0307 17:31:14.019791       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0307 17:31:14.019829       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0307 17:31:14.019778       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0307 17:31:14.019835       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0307 17:31:14.019642       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0307 17:31:14.019847       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0307 17:31:14.019767       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0307 17:31:14.019857       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0307 17:31:14.019964       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0307 17:31:14.019973       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0307 17:31:14.019966       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0307 17:31:14.019978       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0307 17:31:14.020029       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0307 17:31:14.020040       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0307 17:31:14.020101       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0307 17:31:14.020111       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0307 17:31:14.020202       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0307 17:31:14.020211       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0307 17:31:14.900347       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0307 17:31:14.900364       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0307 17:31:14.958262       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0307 17:31:14.958278       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0307 17:31:15.000034       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0307 17:31:15.000052       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0307 17:31:15.516662       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 07 17:35:20 addons-040000 kubelet[2384]: E0307 17:35:20.120354    2384 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="05579015-4b13-4c6d-a462-eb2c7db03546" containerName="minikube-ingress-dns"
	Mar 07 17:35:20 addons-040000 kubelet[2384]: E0307 17:35:20.120375    2384 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1907aac4-6f9d-4cf5-8255-75931fce2d14" containerName="busybox"
	Mar 07 17:35:20 addons-040000 kubelet[2384]: E0307 17:35:20.120390    2384 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="05579015-4b13-4c6d-a462-eb2c7db03546" containerName="minikube-ingress-dns"
	Mar 07 17:35:20 addons-040000 kubelet[2384]: I0307 17:35:20.120429    2384 memory_manager.go:346] "RemoveStaleState removing state" podUID="1907aac4-6f9d-4cf5-8255-75931fce2d14" containerName="busybox"
	Mar 07 17:35:20 addons-040000 kubelet[2384]: I0307 17:35:20.120444    2384 memory_manager.go:346] "RemoveStaleState removing state" podUID="05579015-4b13-4c6d-a462-eb2c7db03546" containerName="minikube-ingress-dns"
	Mar 07 17:35:20 addons-040000 kubelet[2384]: I0307 17:35:20.120458    2384 memory_manager.go:346] "RemoveStaleState removing state" podUID="05579015-4b13-4c6d-a462-eb2c7db03546" containerName="minikube-ingress-dns"
	Mar 07 17:35:20 addons-040000 kubelet[2384]: I0307 17:35:20.285909    2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/fc2fa75b-7ad5-41f6-b2b8-817bac562e18-gcp-creds\") pod \"helper-pod-delete-pvc-c4f2b98e-ba12-40bb-bdca-a7b767b70ab6\" (UID: \"fc2fa75b-7ad5-41f6-b2b8-817bac562e18\") " pod="local-path-storage/helper-pod-delete-pvc-c4f2b98e-ba12-40bb-bdca-a7b767b70ab6"
	Mar 07 17:35:20 addons-040000 kubelet[2384]: I0307 17:35:20.285937    2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zncwr\" (UniqueName: \"kubernetes.io/projected/fc2fa75b-7ad5-41f6-b2b8-817bac562e18-kube-api-access-zncwr\") pod \"helper-pod-delete-pvc-c4f2b98e-ba12-40bb-bdca-a7b767b70ab6\" (UID: \"fc2fa75b-7ad5-41f6-b2b8-817bac562e18\") " pod="local-path-storage/helper-pod-delete-pvc-c4f2b98e-ba12-40bb-bdca-a7b767b70ab6"
	Mar 07 17:35:20 addons-040000 kubelet[2384]: I0307 17:35:20.285950    2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/fc2fa75b-7ad5-41f6-b2b8-817bac562e18-script\") pod \"helper-pod-delete-pvc-c4f2b98e-ba12-40bb-bdca-a7b767b70ab6\" (UID: \"fc2fa75b-7ad5-41f6-b2b8-817bac562e18\") " pod="local-path-storage/helper-pod-delete-pvc-c4f2b98e-ba12-40bb-bdca-a7b767b70ab6"
	Mar 07 17:35:20 addons-040000 kubelet[2384]: I0307 17:35:20.285960    2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/fc2fa75b-7ad5-41f6-b2b8-817bac562e18-data\") pod \"helper-pod-delete-pvc-c4f2b98e-ba12-40bb-bdca-a7b767b70ab6\" (UID: \"fc2fa75b-7ad5-41f6-b2b8-817bac562e18\") " pod="local-path-storage/helper-pod-delete-pvc-c4f2b98e-ba12-40bb-bdca-a7b767b70ab6"
	Mar 07 17:35:20 addons-040000 kubelet[2384]: I0307 17:35:20.306936    2384 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da5d3ed44b8e96d13ce6fa7dd595964f3e766e3990888d53ea63b834e177101a"
	Mar 07 17:35:20 addons-040000 kubelet[2384]: E0307 17:35:20.386337    2384 configmap.go:199] Couldn't get configMap local-path-storage/local-path-config: configmap "local-path-config" not found
	Mar 07 17:35:20 addons-040000 kubelet[2384]: E0307 17:35:20.386363    2384 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fc2fa75b-7ad5-41f6-b2b8-817bac562e18-script podName:fc2fa75b-7ad5-41f6-b2b8-817bac562e18 nodeName:}" failed. No retries permitted until 2024-03-07 17:35:20.886354657 +0000 UTC m=+244.218664523 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "script" (UniqueName: "kubernetes.io/configmap/fc2fa75b-7ad5-41f6-b2b8-817bac562e18-script") pod "helper-pod-delete-pvc-c4f2b98e-ba12-40bb-bdca-a7b767b70ab6" (UID: "fc2fa75b-7ad5-41f6-b2b8-817bac562e18") : configmap "local-path-config" not found
	Mar 07 17:35:20 addons-040000 kubelet[2384]: E0307 17:35:20.387076    2384 projected.go:198] Error preparing data for projected volume kube-api-access-zncwr for pod local-path-storage/helper-pod-delete-pvc-c4f2b98e-ba12-40bb-bdca-a7b767b70ab6: failed to fetch token: serviceaccounts "local-path-provisioner-service-account" is forbidden: unable to create new content in namespace local-path-storage because it is being terminated
	Mar 07 17:35:20 addons-040000 kubelet[2384]: E0307 17:35:20.387103    2384 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc2fa75b-7ad5-41f6-b2b8-817bac562e18-kube-api-access-zncwr podName:fc2fa75b-7ad5-41f6-b2b8-817bac562e18 nodeName:}" failed. No retries permitted until 2024-03-07 17:35:20.887097146 +0000 UTC m=+244.219406971 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zncwr" (UniqueName: "kubernetes.io/projected/fc2fa75b-7ad5-41f6-b2b8-817bac562e18-kube-api-access-zncwr") pod "helper-pod-delete-pvc-c4f2b98e-ba12-40bb-bdca-a7b767b70ab6" (UID: "fc2fa75b-7ad5-41f6-b2b8-817bac562e18") : failed to fetch token: serviceaccounts "local-path-provisioner-service-account" is forbidden: unable to create new content in namespace local-path-storage because it is being terminated
	Mar 07 17:35:20 addons-040000 kubelet[2384]: I0307 17:35:20.749356    2384 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1907aac4-6f9d-4cf5-8255-75931fce2d14" path="/var/lib/kubelet/pods/1907aac4-6f9d-4cf5-8255-75931fce2d14/volumes"
	Mar 07 17:35:20 addons-040000 kubelet[2384]: I0307 17:35:20.749551    2384 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ec63cdcd-993d-43f0-ab41-29384e6b7831" path="/var/lib/kubelet/pods/ec63cdcd-993d-43f0-ab41-29384e6b7831/volumes"
	Mar 07 17:35:20 addons-040000 kubelet[2384]: E0307 17:35:20.888126    2384 configmap.go:199] Couldn't get configMap local-path-storage/local-path-config: configmap "local-path-config" not found
	Mar 07 17:35:20 addons-040000 kubelet[2384]: E0307 17:35:20.888182    2384 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fc2fa75b-7ad5-41f6-b2b8-817bac562e18-script podName:fc2fa75b-7ad5-41f6-b2b8-817bac562e18 nodeName:}" failed. No retries permitted until 2024-03-07 17:35:21.888170973 +0000 UTC m=+245.220480839 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "script" (UniqueName: "kubernetes.io/configmap/fc2fa75b-7ad5-41f6-b2b8-817bac562e18-script") pod "helper-pod-delete-pvc-c4f2b98e-ba12-40bb-bdca-a7b767b70ab6" (UID: "fc2fa75b-7ad5-41f6-b2b8-817bac562e18") : configmap "local-path-config" not found
	Mar 07 17:35:20 addons-040000 kubelet[2384]: E0307 17:35:20.889168    2384 projected.go:198] Error preparing data for projected volume kube-api-access-zncwr for pod local-path-storage/helper-pod-delete-pvc-c4f2b98e-ba12-40bb-bdca-a7b767b70ab6: failed to fetch token: serviceaccounts "local-path-provisioner-service-account" is forbidden: unable to create new content in namespace local-path-storage because it is being terminated
	Mar 07 17:35:20 addons-040000 kubelet[2384]: E0307 17:35:20.889194    2384 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc2fa75b-7ad5-41f6-b2b8-817bac562e18-kube-api-access-zncwr podName:fc2fa75b-7ad5-41f6-b2b8-817bac562e18 nodeName:}" failed. No retries permitted until 2024-03-07 17:35:21.889187879 +0000 UTC m=+245.221497745 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-zncwr" (UniqueName: "kubernetes.io/projected/fc2fa75b-7ad5-41f6-b2b8-817bac562e18-kube-api-access-zncwr") pod "helper-pod-delete-pvc-c4f2b98e-ba12-40bb-bdca-a7b767b70ab6" (UID: "fc2fa75b-7ad5-41f6-b2b8-817bac562e18") : failed to fetch token: serviceaccounts "local-path-provisioner-service-account" is forbidden: unable to create new content in namespace local-path-storage because it is being terminated
	Mar 07 17:35:21 addons-040000 kubelet[2384]: E0307 17:35:21.892788    2384 configmap.go:199] Couldn't get configMap local-path-storage/local-path-config: configmap "local-path-config" not found
	Mar 07 17:35:21 addons-040000 kubelet[2384]: E0307 17:35:21.892847    2384 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fc2fa75b-7ad5-41f6-b2b8-817bac562e18-script podName:fc2fa75b-7ad5-41f6-b2b8-817bac562e18 nodeName:}" failed. No retries permitted until 2024-03-07 17:35:23.892834292 +0000 UTC m=+247.225144158 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "script" (UniqueName: "kubernetes.io/configmap/fc2fa75b-7ad5-41f6-b2b8-817bac562e18-script") pod "helper-pod-delete-pvc-c4f2b98e-ba12-40bb-bdca-a7b767b70ab6" (UID: "fc2fa75b-7ad5-41f6-b2b8-817bac562e18") : configmap "local-path-config" not found
	Mar 07 17:35:21 addons-040000 kubelet[2384]: E0307 17:35:21.894283    2384 projected.go:198] Error preparing data for projected volume kube-api-access-zncwr for pod local-path-storage/helper-pod-delete-pvc-c4f2b98e-ba12-40bb-bdca-a7b767b70ab6: failed to fetch token: serviceaccounts "local-path-provisioner-service-account" is forbidden: unable to create new content in namespace local-path-storage because it is being terminated
	Mar 07 17:35:21 addons-040000 kubelet[2384]: E0307 17:35:21.894310    2384 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc2fa75b-7ad5-41f6-b2b8-817bac562e18-kube-api-access-zncwr podName:fc2fa75b-7ad5-41f6-b2b8-817bac562e18 nodeName:}" failed. No retries permitted until 2024-03-07 17:35:23.894303185 +0000 UTC m=+247.226613010 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-zncwr" (UniqueName: "kubernetes.io/projected/fc2fa75b-7ad5-41f6-b2b8-817bac562e18-kube-api-access-zncwr") pod "helper-pod-delete-pvc-c4f2b98e-ba12-40bb-bdca-a7b767b70ab6" (UID: "fc2fa75b-7ad5-41f6-b2b8-817bac562e18") : failed to fetch token: serviceaccounts "local-path-provisioner-service-account" is forbidden: unable to create new content in namespace local-path-storage because it is being terminated
	
	
	==> storage-provisioner [279b5a5cba6a] <==
	I0307 17:31:34.699395       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0307 17:31:34.740786       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0307 17:31:34.740825       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0307 17:31:34.748796       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0307 17:31:34.748880       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-040000_16777199-f1ed-4558-b1c8-7cede07c9f7c!
	I0307 17:31:34.749357       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cb934aeb-ff28-419c-8b92-bea6c422a1b9", APIVersion:"v1", ResourceVersion:"705", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-040000_16777199-f1ed-4558-b1c8-7cede07c9f7c became leader
	I0307 17:31:34.849358       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-040000_16777199-f1ed-4558-b1c8-7cede07c9f7c!
	

-- /stdout --
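
Two patterns in the captured logs above are worth separating from the actual failure. The coredns NXDOMAIN/NOERROR bursts are ordinary Kubernetes search-path expansion: with the pod default of ndots:5, even the fully qualified hello-world-app.default.svc.cluster.local is tried against each search suffix before the exact name answers NOERROR. The kubelet MountVolume.SetUp errors, by contrast, show a teardown race: the local-path-storage namespace is terminating, so the helper pod's configmap and service-account token can no longer be created, which is why helper-pod-delete-pvc-c4f2b98e-ba12-40bb-bdca-a7b767b70ab6 turns up as non-running in the post-mortem below. A minimal way to confirm the DNS side, assuming a pod in the ingress-nginx namespace (the pod-name placeholder is hypothetical, not from this run):

	# Expect: search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local
	#         options ndots:5
	kubectl --context addons-040000 -n ingress-nginx exec <ingress-controller-pod> -- cat /etc/resolv.conf
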
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-040000 -n addons-040000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-040000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: helper-pod-delete-pvc-c4f2b98e-ba12-40bb-bdca-a7b767b70ab6
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-040000 describe pod helper-pod-delete-pvc-c4f2b98e-ba12-40bb-bdca-a7b767b70ab6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-040000 describe pod helper-pod-delete-pvc-c4f2b98e-ba12-40bb-bdca-a7b767b70ab6: exit status 1 (40.393542ms)

** stderr ** 
	Error from server (NotFound): pods "helper-pod-delete-pvc-c4f2b98e-ba12-40bb-bdca-a7b767b70ab6" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-040000 describe pod helper-pod-delete-pvc-c4f2b98e-ba12-40bb-bdca-a7b767b70ab6: exit status 1
--- FAIL: TestAddons/parallel/Ingress (35.68s)

TestCertOptions (10.09s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-521000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
E0307 10:10:16.479747    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/functional-618000/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-521000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.798278083s)

-- stdout --
	* [cert-options-521000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-521000" primary control-plane node in "cert-options-521000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-521000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-521000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-521000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-521000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-521000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (80.98175ms)

-- stdout --
	* The control-plane node cert-options-521000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-521000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-521000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-521000 config view
cert_options_test.go:93: Kubeconfig apiserver port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-521000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-521000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (43.089708ms)

-- stdout --
	* The control-plane node cert-options-521000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-521000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-521000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right API port. 
-- stdout --
	* The control-plane node cert-options-521000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-521000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-03-07 10:10:25.472005 -0800 PST m=+2489.295227084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-521000 -n cert-options-521000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-521000 -n cert-options-521000: exit status 7 (32.347625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-521000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-521000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-521000
--- FAIL: TestCertOptions (10.09s)
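Every assertion in this test failed for the same upstream reason: both VM creation attempts died with 'Failed to connect to "/var/run/socket_vmnet": Connection refused', so there was never a running host whose certificates could be inspected. That error points at the socket_vmnet daemon on the build host rather than at minikube. A minimal host-side sanity check, assuming the usual launchd-managed install (socket path taken from the log):

	# is anything serving the vmnet socket on the build host?
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i socket_vmnet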

TestCertExpiration (195.27s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-259000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-259000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.871174334s)

-- stdout --
	* [cert-expiration-259000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-259000" primary control-plane node in "cert-expiration-259000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-259000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-259000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-259000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-259000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-259000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.234768708s)

-- stdout --
	* [cert-expiration-259000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-259000" primary control-plane node in "cert-expiration-259000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-259000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-259000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-259000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-259000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-259000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-259000" primary control-plane node in "cert-expiration-259000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-259000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-259000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-259000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-03-07 10:13:25.559442 -0800 PST m=+2669.388610584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-259000 -n cert-expiration-259000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-259000 -n cert-expiration-259000: exit status 7 (61.656458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-259000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-259000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-259000
--- FAIL: TestCertExpiration (195.27s)
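As in TestCertOptions, the certificate-expiration logic was never exercised: the first start (--cert-expiration=3m) and the restart (--cert-expiration=8760h) both failed at the socket_vmnet connect, so the expired-certs warning the test looks for could not appear. For reference, on a cluster that does start, the expiry being manipulated can be checked directly; an illustrative sketch reusing the profile name and cert path from this log:

	# show the apiserver certificate's notAfter date inside the guest
	out/minikube-darwin-arm64 ssh -p cert-expiration-259000 -- \
	  "sudo openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"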

TestDockerFlags (10.01s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-256000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-256000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.743374042s)

-- stdout --
	* [docker-flags-256000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-256000" primary control-plane node in "docker-flags-256000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-256000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 10:10:05.540302    4117 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:10:05.540442    4117 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:10:05.540446    4117 out.go:304] Setting ErrFile to fd 2...
	I0307 10:10:05.540449    4117 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:10:05.540558    4117 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:10:05.541565    4117 out.go:298] Setting JSON to false
	I0307 10:10:05.557348    4117 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4177,"bootTime":1709830828,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 10:10:05.557418    4117 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:10:05.561092    4117 out.go:177] * [docker-flags-256000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 10:10:05.570040    4117 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 10:10:05.574049    4117 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:10:05.570131    4117 notify.go:220] Checking for updates...
	I0307 10:10:05.577039    4117 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 10:10:05.580087    4117 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:10:05.583004    4117 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 10:10:05.586090    4117 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:10:05.589443    4117 config.go:182] Loaded profile config "force-systemd-flag-434000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:10:05.589505    4117 config.go:182] Loaded profile config "multinode-606000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:10:05.589550    4117 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:10:05.594036    4117 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 10:10:05.601100    4117 start.go:297] selected driver: qemu2
	I0307 10:10:05.601106    4117 start.go:901] validating driver "qemu2" against <nil>
	I0307 10:10:05.601112    4117 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:10:05.603388    4117 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 10:10:05.607082    4117 out.go:177] * Automatically selected the socket_vmnet network
	I0307 10:10:05.610166    4117 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0307 10:10:05.610218    4117 cni.go:84] Creating CNI manager for ""
	I0307 10:10:05.610226    4117 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 10:10:05.610230    4117 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 10:10:05.610266    4117 start.go:340] cluster config:
	{Name:docker-flags-256000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-256000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMn
etClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:10:05.614599    4117 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:10:05.622059    4117 out.go:177] * Starting "docker-flags-256000" primary control-plane node in "docker-flags-256000" cluster
	I0307 10:10:05.626058    4117 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 10:10:05.626071    4117 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 10:10:05.626080    4117 cache.go:56] Caching tarball of preloaded images
	I0307 10:10:05.626131    4117 preload.go:173] Found /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 10:10:05.626137    4117 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 10:10:05.626201    4117 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/docker-flags-256000/config.json ...
	I0307 10:10:05.626213    4117 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/docker-flags-256000/config.json: {Name:mk1db5a760d0c8360a158531086afdc7ff215826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:10:05.626429    4117 start.go:360] acquireMachinesLock for docker-flags-256000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:10:05.626463    4117 start.go:364] duration metric: took 27.167µs to acquireMachinesLock for "docker-flags-256000"
	I0307 10:10:05.626474    4117 start.go:93] Provisioning new machine with config: &{Name:docker-flags-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-256000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:10:05.626506    4117 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:10:05.635075    4117 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0307 10:10:05.653144    4117 start.go:159] libmachine.API.Create for "docker-flags-256000" (driver="qemu2")
	I0307 10:10:05.653181    4117 client.go:168] LocalClient.Create starting
	I0307 10:10:05.653256    4117 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:10:05.653288    4117 main.go:141] libmachine: Decoding PEM data...
	I0307 10:10:05.653303    4117 main.go:141] libmachine: Parsing certificate...
	I0307 10:10:05.653350    4117 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:10:05.653373    4117 main.go:141] libmachine: Decoding PEM data...
	I0307 10:10:05.653380    4117 main.go:141] libmachine: Parsing certificate...
	I0307 10:10:05.653798    4117 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:10:05.791099    4117 main.go:141] libmachine: Creating SSH key...
	I0307 10:10:05.864732    4117 main.go:141] libmachine: Creating Disk image...
	I0307 10:10:05.864737    4117 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:10:05.864923    4117 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/docker-flags-256000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/docker-flags-256000/disk.qcow2
	I0307 10:10:05.877204    4117 main.go:141] libmachine: STDOUT: 
	I0307 10:10:05.877221    4117 main.go:141] libmachine: STDERR: 
	I0307 10:10:05.877270    4117 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/docker-flags-256000/disk.qcow2 +20000M
	I0307 10:10:05.888114    4117 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:10:05.888133    4117 main.go:141] libmachine: STDERR: 
	I0307 10:10:05.888150    4117 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/docker-flags-256000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/docker-flags-256000/disk.qcow2
	I0307 10:10:05.888154    4117 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:10:05.888189    4117 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/docker-flags-256000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/docker-flags-256000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/docker-flags-256000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:22:e4:97:ad:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/docker-flags-256000/disk.qcow2
	I0307 10:10:05.890188    4117 main.go:141] libmachine: STDOUT: 
	I0307 10:10:05.890207    4117 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:10:05.890221    4117 client.go:171] duration metric: took 237.041709ms to LocalClient.Create
	I0307 10:10:07.892310    4117 start.go:128] duration metric: took 2.265857709s to createHost
	I0307 10:10:07.892392    4117 start.go:83] releasing machines lock for "docker-flags-256000", held for 2.265975209s
	W0307 10:10:07.892468    4117 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:10:07.902122    4117 out.go:177] * Deleting "docker-flags-256000" in qemu2 ...
	W0307 10:10:07.929306    4117 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:10:07.929346    4117 start.go:728] Will try again in 5 seconds ...
	I0307 10:10:12.931314    4117 start.go:360] acquireMachinesLock for docker-flags-256000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:10:12.931695    4117 start.go:364] duration metric: took 307.375µs to acquireMachinesLock for "docker-flags-256000"
	I0307 10:10:12.931821    4117 start.go:93] Provisioning new machine with config: &{Name:docker-flags-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-256000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:10:12.931987    4117 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:10:12.940745    4117 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0307 10:10:12.983415    4117 start.go:159] libmachine.API.Create for "docker-flags-256000" (driver="qemu2")
	I0307 10:10:12.983465    4117 client.go:168] LocalClient.Create starting
	I0307 10:10:12.983577    4117 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:10:12.983631    4117 main.go:141] libmachine: Decoding PEM data...
	I0307 10:10:12.983648    4117 main.go:141] libmachine: Parsing certificate...
	I0307 10:10:12.983719    4117 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:10:12.983761    4117 main.go:141] libmachine: Decoding PEM data...
	I0307 10:10:12.983773    4117 main.go:141] libmachine: Parsing certificate...
	I0307 10:10:12.984269    4117 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:10:13.130157    4117 main.go:141] libmachine: Creating SSH key...
	I0307 10:10:13.182265    4117 main.go:141] libmachine: Creating Disk image...
	I0307 10:10:13.182270    4117 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:10:13.182430    4117 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/docker-flags-256000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/docker-flags-256000/disk.qcow2
	I0307 10:10:13.194866    4117 main.go:141] libmachine: STDOUT: 
	I0307 10:10:13.194883    4117 main.go:141] libmachine: STDERR: 
	I0307 10:10:13.194936    4117 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/docker-flags-256000/disk.qcow2 +20000M
	I0307 10:10:13.205577    4117 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:10:13.205594    4117 main.go:141] libmachine: STDERR: 
	I0307 10:10:13.205607    4117 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/docker-flags-256000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/docker-flags-256000/disk.qcow2
	I0307 10:10:13.205614    4117 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:10:13.205641    4117 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/docker-flags-256000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/docker-flags-256000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/docker-flags-256000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:ad:d6:86:84:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/docker-flags-256000/disk.qcow2
	I0307 10:10:13.207338    4117 main.go:141] libmachine: STDOUT: 
	I0307 10:10:13.207353    4117 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:10:13.207366    4117 client.go:171] duration metric: took 223.901709ms to LocalClient.Create
	I0307 10:10:15.209462    4117 start.go:128] duration metric: took 2.277525459s to createHost
	I0307 10:10:15.209521    4117 start.go:83] releasing machines lock for "docker-flags-256000", held for 2.277863875s
	W0307 10:10:15.209859    4117 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-256000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-256000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:10:15.221519    4117 out.go:177] 
	W0307 10:10:15.224660    4117 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:10:15.224700    4117 out.go:239] * 
	* 
	W0307 10:10:15.227447    4117 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:10:15.238525    4117 out.go:177] 

** /stderr **
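The -v=5 trace pins down the failing step: libmachine launches QEMU through the vmnet shim, /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 ..., and the -netdev socket,id=net0,fd=3 argument shows the client is expected to hand QEMU the vmnet connection as file descriptor 3. The "Connection refused" therefore comes from the client's initial connect, before QEMU ever runs. That step can be reproduced in isolation; a sketch using the same paths, with 'true' standing in for QEMU:

	# reproduce just the connect that libmachine performs (no VM involved)
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true \
	  || echo "socket_vmnet is not accepting connections"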
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-256000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-256000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-256000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (76.937167ms)

-- stdout --
	* The control-plane node docker-flags-256000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-256000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-256000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-256000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-256000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-256000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-256000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-256000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-256000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (46.725334ms)

-- stdout --
	* The control-plane node docker-flags-256000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-256000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-256000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-256000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. Output: "* The control-plane node docker-flags-256000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-256000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-03-07 10:10:15.380479 -0800 PST m=+2479.203368084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-256000 -n docker-flags-256000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-256000 -n docker-flags-256000: exit status 7 (30.971542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-256000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-256000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-256000
--- FAIL: TestDockerFlags (10.01s)
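Both systemctl probes returned the "host is not running" hint instead of the docker unit's properties, so the --docker-env and --docker-opt assertions never saw real data. On a healthy node the same probes would be expected to surface the injected values; illustrative output only, assuming the flags from the start command above:

	out/minikube-darwin-arm64 -p docker-flags-256000 ssh \
	  "sudo systemctl show docker --property=Environment --no-pager"
	# expected to contain: Environment=FOO=BAR BAZ=BAT   (from --docker-env)
	out/minikube-darwin-arm64 -p docker-flags-256000 ssh \
	  "sudo systemctl show docker --property=ExecStart --no-pager"
	# expected to contain: --debug and --icc=true        (from --docker-opt)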

TestForceSystemdFlag (9.96s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-434000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-434000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.748323792s)

-- stdout --
	* [force-systemd-flag-434000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-434000" primary control-plane node in "force-systemd-flag-434000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-434000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 10:10:00.521782    4095 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:10:00.521909    4095 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:10:00.521912    4095 out.go:304] Setting ErrFile to fd 2...
	I0307 10:10:00.521915    4095 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:10:00.522041    4095 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:10:00.523082    4095 out.go:298] Setting JSON to false
	I0307 10:10:00.538830    4095 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4172,"bootTime":1709830828,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 10:10:00.538904    4095 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:10:00.545132    4095 out.go:177] * [force-systemd-flag-434000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 10:10:00.552045    4095 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 10:10:00.556999    4095 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:10:00.552099    4095 notify.go:220] Checking for updates...
	I0307 10:10:00.568002    4095 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 10:10:00.571069    4095 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:10:00.574047    4095 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 10:10:00.577010    4095 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:10:00.580376    4095 config.go:182] Loaded profile config "force-systemd-env-411000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:10:00.580451    4095 config.go:182] Loaded profile config "multinode-606000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:10:00.580499    4095 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:10:00.585016    4095 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 10:10:00.592018    4095 start.go:297] selected driver: qemu2
	I0307 10:10:00.592023    4095 start.go:901] validating driver "qemu2" against <nil>
	I0307 10:10:00.592029    4095 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:10:00.594432    4095 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 10:10:00.598049    4095 out.go:177] * Automatically selected the socket_vmnet network
	I0307 10:10:00.601037    4095 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 10:10:00.601075    4095 cni.go:84] Creating CNI manager for ""
	I0307 10:10:00.601083    4095 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 10:10:00.601088    4095 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 10:10:00.601131    4095 start.go:340] cluster config:
	{Name:force-systemd-flag-434000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-434000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Static
IP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:10:00.605823    4095 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:10:00.612868    4095 out.go:177] * Starting "force-systemd-flag-434000" primary control-plane node in "force-systemd-flag-434000" cluster
	I0307 10:10:00.617038    4095 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 10:10:00.617053    4095 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 10:10:00.617064    4095 cache.go:56] Caching tarball of preloaded images
	I0307 10:10:00.617122    4095 preload.go:173] Found /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 10:10:00.617129    4095 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 10:10:00.617208    4095 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/force-systemd-flag-434000/config.json ...
	I0307 10:10:00.617221    4095 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/force-systemd-flag-434000/config.json: {Name:mkdca8cf29a00e4f18b1da1da6f227a0a238960d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:10:00.617467    4095 start.go:360] acquireMachinesLock for force-systemd-flag-434000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:10:00.617506    4095 start.go:364] duration metric: took 30.292µs to acquireMachinesLock for "force-systemd-flag-434000"
	I0307 10:10:00.617522    4095 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-434000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-434000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:10:00.617558    4095 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:10:00.624016    4095 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0307 10:10:00.643185    4095 start.go:159] libmachine.API.Create for "force-systemd-flag-434000" (driver="qemu2")
	I0307 10:10:00.643225    4095 client.go:168] LocalClient.Create starting
	I0307 10:10:00.643290    4095 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:10:00.643325    4095 main.go:141] libmachine: Decoding PEM data...
	I0307 10:10:00.643333    4095 main.go:141] libmachine: Parsing certificate...
	I0307 10:10:00.643384    4095 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:10:00.643407    4095 main.go:141] libmachine: Decoding PEM data...
	I0307 10:10:00.643414    4095 main.go:141] libmachine: Parsing certificate...
	I0307 10:10:00.643802    4095 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:10:00.779562    4095 main.go:141] libmachine: Creating SSH key...
	I0307 10:10:00.873899    4095 main.go:141] libmachine: Creating Disk image...
	I0307 10:10:00.873913    4095 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:10:00.874083    4095 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/force-systemd-flag-434000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/force-systemd-flag-434000/disk.qcow2
	I0307 10:10:00.886165    4095 main.go:141] libmachine: STDOUT: 
	I0307 10:10:00.886185    4095 main.go:141] libmachine: STDERR: 
	I0307 10:10:00.886249    4095 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/force-systemd-flag-434000/disk.qcow2 +20000M
	I0307 10:10:00.896847    4095 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:10:00.896864    4095 main.go:141] libmachine: STDERR: 
	I0307 10:10:00.896878    4095 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/force-systemd-flag-434000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/force-systemd-flag-434000/disk.qcow2
	I0307 10:10:00.896884    4095 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:10:00.896923    4095 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/force-systemd-flag-434000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/force-systemd-flag-434000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/force-systemd-flag-434000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:93:ae:e4:ae:d6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/force-systemd-flag-434000/disk.qcow2
	I0307 10:10:00.898682    4095 main.go:141] libmachine: STDOUT: 
	I0307 10:10:00.898695    4095 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:10:00.898723    4095 client.go:171] duration metric: took 255.502583ms to LocalClient.Create
	I0307 10:10:02.900897    4095 start.go:128] duration metric: took 2.2833875s to createHost
	I0307 10:10:02.900995    4095 start.go:83] releasing machines lock for "force-systemd-flag-434000", held for 2.283552667s
	W0307 10:10:02.901052    4095 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:10:02.912756    4095 out.go:177] * Deleting "force-systemd-flag-434000" in qemu2 ...
	W0307 10:10:02.939512    4095 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:10:02.939531    4095 start.go:728] Will try again in 5 seconds ...
	I0307 10:10:07.941499    4095 start.go:360] acquireMachinesLock for force-systemd-flag-434000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:10:07.941838    4095 start.go:364] duration metric: took 261.708µs to acquireMachinesLock for "force-systemd-flag-434000"
	I0307 10:10:07.941951    4095 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-434000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-434000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:10:07.942163    4095 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:10:07.950004    4095 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0307 10:10:07.992480    4095 start.go:159] libmachine.API.Create for "force-systemd-flag-434000" (driver="qemu2")
	I0307 10:10:07.992532    4095 client.go:168] LocalClient.Create starting
	I0307 10:10:07.992618    4095 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:10:07.992671    4095 main.go:141] libmachine: Decoding PEM data...
	I0307 10:10:07.992689    4095 main.go:141] libmachine: Parsing certificate...
	I0307 10:10:07.992754    4095 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:10:07.992790    4095 main.go:141] libmachine: Decoding PEM data...
	I0307 10:10:07.992798    4095 main.go:141] libmachine: Parsing certificate...
	I0307 10:10:07.993245    4095 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:10:08.137399    4095 main.go:141] libmachine: Creating SSH key...
	I0307 10:10:08.168489    4095 main.go:141] libmachine: Creating Disk image...
	I0307 10:10:08.168494    4095 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:10:08.168651    4095 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/force-systemd-flag-434000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/force-systemd-flag-434000/disk.qcow2
	I0307 10:10:08.180816    4095 main.go:141] libmachine: STDOUT: 
	I0307 10:10:08.180838    4095 main.go:141] libmachine: STDERR: 
	I0307 10:10:08.180905    4095 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/force-systemd-flag-434000/disk.qcow2 +20000M
	I0307 10:10:08.191676    4095 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:10:08.191693    4095 main.go:141] libmachine: STDERR: 
	I0307 10:10:08.191708    4095 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/force-systemd-flag-434000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/force-systemd-flag-434000/disk.qcow2
	I0307 10:10:08.191711    4095 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:10:08.191741    4095 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/force-systemd-flag-434000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/force-systemd-flag-434000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/force-systemd-flag-434000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:37:d4:b4:3d:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/force-systemd-flag-434000/disk.qcow2
	I0307 10:10:08.193557    4095 main.go:141] libmachine: STDOUT: 
	I0307 10:10:08.193574    4095 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:10:08.193585    4095 client.go:171] duration metric: took 201.055458ms to LocalClient.Create
	I0307 10:10:10.195698    4095 start.go:128] duration metric: took 2.253581667s to createHost
	I0307 10:10:10.195756    4095 start.go:83] releasing machines lock for "force-systemd-flag-434000", held for 2.253970666s
	W0307 10:10:10.196076    4095 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-434000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-434000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:10:10.209735    4095 out.go:177] 
	W0307 10:10:10.212872    4095 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:10:10.212903    4095 out.go:239] * 
	* 
	W0307 10:10:10.215116    4095 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:10:10.225648    4095 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-434000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-434000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-434000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (78.495584ms)

-- stdout --
	* The control-plane node force-systemd-flag-434000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-434000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-434000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-03-07 10:10:10.322294 -0800 PST m=+2474.145015876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-434000 -n force-systemd-flag-434000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-434000 -n force-systemd-flag-434000: exit status 7 (36.477875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-434000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-434000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-434000
--- FAIL: TestForceSystemdFlag (9.96s)
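
Note: both provisioning attempts above fail at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so no QEMU VM is ever launched and the test exits before it can inspect the cgroup driver. A minimal diagnostic sketch for the build host, assuming the /opt/socket_vmnet layout shown in the log (these commands are illustrative and were not part of the captured run):

	# Check that the daemon's unix socket exists; "Connection refused" with an
	# existing socket file means nothing is listening on it.
	ls -l /var/run/socket_vmnet

	# If the daemon is down, restart it. It must run as root to use Apple's
	# vmnet.framework; 192.168.105.1 matches the guest subnet (192.168.105.x)
	# seen elsewhere in this report.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

The other short qemu2 start failures in this report that end in exit status 80 with this same stderr most likely share this root cause.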

TestForceSystemdEnv (10.09s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-411000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-411000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.874784875s)

-- stdout --
	* [force-systemd-env-411000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-411000" primary control-plane node in "force-systemd-env-411000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-411000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 10:09:55.447178    4062 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:09:55.447286    4062 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:09:55.447290    4062 out.go:304] Setting ErrFile to fd 2...
	I0307 10:09:55.447292    4062 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:09:55.447407    4062 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:09:55.448459    4062 out.go:298] Setting JSON to false
	I0307 10:09:55.464935    4062 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4167,"bootTime":1709830828,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 10:09:55.464997    4062 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:09:55.469702    4062 out.go:177] * [force-systemd-env-411000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 10:09:55.481688    4062 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 10:09:55.477775    4062 notify.go:220] Checking for updates...
	I0307 10:09:55.490812    4062 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:09:55.498623    4062 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 10:09:55.506551    4062 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:09:55.514670    4062 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 10:09:55.521672    4062 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0307 10:09:55.526113    4062 config.go:182] Loaded profile config "multinode-606000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:09:55.526162    4062 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:09:55.530682    4062 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 10:09:55.537648    4062 start.go:297] selected driver: qemu2
	I0307 10:09:55.537653    4062 start.go:901] validating driver "qemu2" against <nil>
	I0307 10:09:55.537659    4062 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:09:55.540015    4062 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 10:09:55.543595    4062 out.go:177] * Automatically selected the socket_vmnet network
	I0307 10:09:55.547782    4062 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 10:09:55.547824    4062 cni.go:84] Creating CNI manager for ""
	I0307 10:09:55.547833    4062 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 10:09:55.547840    4062 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 10:09:55.547863    4062 start.go:340] cluster config:
	{Name:force-systemd-env-411000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-411000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:09:55.552106    4062 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:09:55.558653    4062 out.go:177] * Starting "force-systemd-env-411000" primary control-plane node in "force-systemd-env-411000" cluster
	I0307 10:09:55.562731    4062 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 10:09:55.562753    4062 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 10:09:55.562762    4062 cache.go:56] Caching tarball of preloaded images
	I0307 10:09:55.562826    4062 preload.go:173] Found /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 10:09:55.562832    4062 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 10:09:55.562888    4062 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/force-systemd-env-411000/config.json ...
	I0307 10:09:55.562899    4062 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/force-systemd-env-411000/config.json: {Name:mk8a2444e1ed8be0ac3ae954f5fbd4501917f4a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:09:55.563156    4062 start.go:360] acquireMachinesLock for force-systemd-env-411000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:09:55.563193    4062 start.go:364] duration metric: took 26.25µs to acquireMachinesLock for "force-systemd-env-411000"
	I0307 10:09:55.563206    4062 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-411000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-411000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:09:55.563234    4062 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:09:55.567663    4062 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0307 10:09:55.584357    4062 start.go:159] libmachine.API.Create for "force-systemd-env-411000" (driver="qemu2")
	I0307 10:09:55.584388    4062 client.go:168] LocalClient.Create starting
	I0307 10:09:55.584446    4062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:09:55.584471    4062 main.go:141] libmachine: Decoding PEM data...
	I0307 10:09:55.584481    4062 main.go:141] libmachine: Parsing certificate...
	I0307 10:09:55.584524    4062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:09:55.584548    4062 main.go:141] libmachine: Decoding PEM data...
	I0307 10:09:55.584554    4062 main.go:141] libmachine: Parsing certificate...
	I0307 10:09:55.584866    4062 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:09:55.723488    4062 main.go:141] libmachine: Creating SSH key...
	I0307 10:09:55.810699    4062 main.go:141] libmachine: Creating Disk image...
	I0307 10:09:55.810705    4062 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:09:55.810869    4062 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/force-systemd-env-411000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/force-systemd-env-411000/disk.qcow2
	I0307 10:09:55.823407    4062 main.go:141] libmachine: STDOUT: 
	I0307 10:09:55.823433    4062 main.go:141] libmachine: STDERR: 
	I0307 10:09:55.823511    4062 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/force-systemd-env-411000/disk.qcow2 +20000M
	I0307 10:09:55.835000    4062 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:09:55.835016    4062 main.go:141] libmachine: STDERR: 
	I0307 10:09:55.835040    4062 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/force-systemd-env-411000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/force-systemd-env-411000/disk.qcow2
	I0307 10:09:55.835044    4062 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:09:55.835074    4062 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/force-systemd-env-411000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/force-systemd-env-411000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/force-systemd-env-411000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:88:d3:85:03:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/force-systemd-env-411000/disk.qcow2
	I0307 10:09:55.836884    4062 main.go:141] libmachine: STDOUT: 
	I0307 10:09:55.836900    4062 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:09:55.836920    4062 client.go:171] duration metric: took 252.534625ms to LocalClient.Create
	I0307 10:09:57.837660    4062 start.go:128] duration metric: took 2.274457542s to createHost
	I0307 10:09:57.837760    4062 start.go:83] releasing machines lock for "force-systemd-env-411000", held for 2.274629958s
	W0307 10:09:57.837844    4062 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:09:57.844277    4062 out.go:177] * Deleting "force-systemd-env-411000" in qemu2 ...
	W0307 10:09:57.871193    4062 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:09:57.871232    4062 start.go:728] Will try again in 5 seconds ...
	I0307 10:10:02.873242    4062 start.go:360] acquireMachinesLock for force-systemd-env-411000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:10:02.901408    4062 start.go:364] duration metric: took 28.04225ms to acquireMachinesLock for "force-systemd-env-411000"
	I0307 10:10:02.901492    4062 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-411000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-411000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:10:02.901761    4062 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:10:02.907560    4062 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0307 10:10:02.955337    4062 start.go:159] libmachine.API.Create for "force-systemd-env-411000" (driver="qemu2")
	I0307 10:10:02.955384    4062 client.go:168] LocalClient.Create starting
	I0307 10:10:02.955510    4062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:10:02.955588    4062 main.go:141] libmachine: Decoding PEM data...
	I0307 10:10:02.955602    4062 main.go:141] libmachine: Parsing certificate...
	I0307 10:10:02.955663    4062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:10:02.955704    4062 main.go:141] libmachine: Decoding PEM data...
	I0307 10:10:02.955714    4062 main.go:141] libmachine: Parsing certificate...
	I0307 10:10:02.956230    4062 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:10:03.107698    4062 main.go:141] libmachine: Creating SSH key...
	I0307 10:10:03.212324    4062 main.go:141] libmachine: Creating Disk image...
	I0307 10:10:03.212329    4062 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:10:03.212506    4062 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/force-systemd-env-411000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/force-systemd-env-411000/disk.qcow2
	I0307 10:10:03.224549    4062 main.go:141] libmachine: STDOUT: 
	I0307 10:10:03.224577    4062 main.go:141] libmachine: STDERR: 
	I0307 10:10:03.224625    4062 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/force-systemd-env-411000/disk.qcow2 +20000M
	I0307 10:10:03.234994    4062 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:10:03.235009    4062 main.go:141] libmachine: STDERR: 
	I0307 10:10:03.235017    4062 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/force-systemd-env-411000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/force-systemd-env-411000/disk.qcow2
	I0307 10:10:03.235022    4062 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:10:03.235065    4062 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/force-systemd-env-411000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/force-systemd-env-411000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/force-systemd-env-411000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:72:bd:83:06:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/force-systemd-env-411000/disk.qcow2
	I0307 10:10:03.236653    4062 main.go:141] libmachine: STDOUT: 
	I0307 10:10:03.236669    4062 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:10:03.236682    4062 client.go:171] duration metric: took 281.299625ms to LocalClient.Create
	I0307 10:10:05.238789    4062 start.go:128] duration metric: took 2.337073917s to createHost
	I0307 10:10:05.238842    4062 start.go:83] releasing machines lock for "force-systemd-env-411000", held for 2.337485791s
	W0307 10:10:05.239264    4062 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-411000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-411000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:10:05.250878    4062 out.go:177] 
	W0307 10:10:05.260925    4062 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:10:05.261022    4062 out.go:239] * 
	* 
	W0307 10:10:05.263780    4062 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:10:05.276776    4062 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-411000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-411000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-411000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (82.770459ms)

-- stdout --
	* The control-plane node force-systemd-env-411000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-411000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-411000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-03-07 10:10:05.377133 -0800 PST m=+2469.199690959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-411000 -n force-systemd-env-411000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-411000 -n force-systemd-env-411000: exit status 7 (35.664ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-411000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-411000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-411000
--- FAIL: TestForceSystemdEnv (10.09s)
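
Note: this is the same failure mode as TestForceSystemdFlag above; both LocalClient.Create attempts abort because socket_vmnet_client cannot connect to /var/run/socket_vmnet, so the diagnostic sketch after that test applies here unchanged.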

TestFunctional/parallel/ServiceCmdConnect (30.97s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-618000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-618000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-2dhg4" [86800dcd-1e71-4b49-a87c-0c279831fa20] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-2dhg4" [86800dcd-1e71-4b49-a87c-0c279831fa20] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004011792s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.105.4:32328
functional_test.go:1657: error fetching http://192.168.105.4:32328: Get "http://192.168.105.4:32328": dial tcp 192.168.105.4:32328: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32328: Get "http://192.168.105.4:32328": dial tcp 192.168.105.4:32328: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32328: Get "http://192.168.105.4:32328": dial tcp 192.168.105.4:32328: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32328: Get "http://192.168.105.4:32328": dial tcp 192.168.105.4:32328: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32328: Get "http://192.168.105.4:32328": dial tcp 192.168.105.4:32328: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32328: Get "http://192.168.105.4:32328": dial tcp 192.168.105.4:32328: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32328: Get "http://192.168.105.4:32328": dial tcp 192.168.105.4:32328: connect: connection refused
functional_test.go:1677: failed to fetch http://192.168.105.4:32328: Get "http://192.168.105.4:32328": dial tcp 192.168.105.4:32328: connect: connection refused
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-618000 describe po hello-node-connect
functional_test.go:1602: hello-node pod describe:
Name:             hello-node-connect-7799dfb7c6-2dhg4
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-618000/192.168.105.4
Start Time:       Thu, 07 Mar 2024 09:40:40 -0800
Labels:           app=hello-node-connect
                  pod-template-hash=7799dfb7c6
Annotations:      <none>
Status:           Running
IP:               10.244.0.10
IPs:
  IP:           10.244.0.10
Controlled By:  ReplicaSet/hello-node-connect-7799dfb7c6
Containers:
  echoserver-arm:
    Container ID:   docker://492468e2cac14d18da55fd77cb8226408e35b0f617fc6f048f7c246904a312ab
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 07 Mar 2024 09:40:54 -0800
      Finished:     Thu, 07 Mar 2024 09:40:54 -0800
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lbxmz (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-lbxmz:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  30s                default-scheduler  Successfully assigned default/hello-node-connect-7799dfb7c6-2dhg4 to functional-618000
  Normal   Pulled     17s (x3 over 29s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Normal   Created    17s (x3 over 29s)  kubelet            Created container echoserver-arm
  Normal   Started    16s (x3 over 29s)  kubelet            Started container echoserver-arm
  Warning  BackOff    4s (x3 over 28s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-7799dfb7c6-2dhg4_default(86800dcd-1e71-4b49-a87c-0c279831fa20)

functional_test.go:1604: (dbg) Run:  kubectl --context functional-618000 logs -l app=hello-node-connect
functional_test.go:1608: hello-node logs:
exec /usr/sbin/nginx: exec format error
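
Note: "exec format error" means the kernel refused to execute the container's entrypoint binary, which on an arm64 node typically indicates an image built for a different CPU architecture; the echoserver-arm:1.8 image evidently ships an nginx binary the arm64 guest cannot run. A quick, illustrative way to see which platform an image declares, assuming a local docker CLI (not part of the captured run):

	# print the OS/architecture recorded in the pulled image's config
	docker image inspect registry.k8s.io/echoserver-arm:1.8 --format '{{.Os}}/{{.Architecture}}'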
functional_test.go:1610: (dbg) Run:  kubectl --context functional-618000 describe svc hello-node-connect
functional_test.go:1614: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.111.144.32
IPs:                      10.111.144.32
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32328/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
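
Note: the empty Endpoints: field above ties the symptoms together: the only backing pod is crash-looping and never becomes Ready, so the service has no endpoints and every connection to NodePort 32328 is refused. One way to confirm, using the same kubectl context as the test (illustrative, not captured output):

	kubectl --context functional-618000 get endpoints hello-node-connect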
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-618000 -n functional-618000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-618000 ssh findmnt                                                                                        | functional-618000 | jenkins | v1.32.0 | 07 Mar 24 09:40 PST | 07 Mar 24 09:40 PST |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-618000 ssh -- ls                                                                                          | functional-618000 | jenkins | v1.32.0 | 07 Mar 24 09:40 PST | 07 Mar 24 09:40 PST |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-618000 ssh cat                                                                                            | functional-618000 | jenkins | v1.32.0 | 07 Mar 24 09:40 PST | 07 Mar 24 09:40 PST |
	|           | /mount-9p/test-1709833258143293000                                                                                   |                   |         |         |                     |                     |
	| ssh       | functional-618000 ssh stat                                                                                           | functional-618000 | jenkins | v1.32.0 | 07 Mar 24 09:41 PST | 07 Mar 24 09:41 PST |
	|           | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-618000 ssh stat                                                                                           | functional-618000 | jenkins | v1.32.0 | 07 Mar 24 09:41 PST | 07 Mar 24 09:41 PST |
	|           | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-618000 ssh sudo                                                                                           | functional-618000 | jenkins | v1.32.0 | 07 Mar 24 09:41 PST | 07 Mar 24 09:41 PST |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-618000 ssh findmnt                                                                                        | functional-618000 | jenkins | v1.32.0 | 07 Mar 24 09:41 PST |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-618000                                                                                                 | functional-618000 | jenkins | v1.32.0 | 07 Mar 24 09:41 PST |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1771804843/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-618000 ssh findmnt                                                                                        | functional-618000 | jenkins | v1.32.0 | 07 Mar 24 09:41 PST | 07 Mar 24 09:41 PST |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-618000 ssh -- ls                                                                                          | functional-618000 | jenkins | v1.32.0 | 07 Mar 24 09:41 PST | 07 Mar 24 09:41 PST |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-618000 ssh sudo                                                                                           | functional-618000 | jenkins | v1.32.0 | 07 Mar 24 09:41 PST |                     |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-618000                                                                                                 | functional-618000 | jenkins | v1.32.0 | 07 Mar 24 09:41 PST |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup872211090/001:/mount1    |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-618000                                                                                                 | functional-618000 | jenkins | v1.32.0 | 07 Mar 24 09:41 PST |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup872211090/001:/mount3    |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-618000                                                                                                 | functional-618000 | jenkins | v1.32.0 | 07 Mar 24 09:41 PST |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup872211090/001:/mount2    |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-618000 ssh findmnt                                                                                        | functional-618000 | jenkins | v1.32.0 | 07 Mar 24 09:41 PST |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-618000 ssh findmnt                                                                                        | functional-618000 | jenkins | v1.32.0 | 07 Mar 24 09:41 PST | 07 Mar 24 09:41 PST |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-618000 ssh findmnt                                                                                        | functional-618000 | jenkins | v1.32.0 | 07 Mar 24 09:41 PST |                     |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-618000 ssh findmnt                                                                                        | functional-618000 | jenkins | v1.32.0 | 07 Mar 24 09:41 PST | 07 Mar 24 09:41 PST |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-618000 ssh findmnt                                                                                        | functional-618000 | jenkins | v1.32.0 | 07 Mar 24 09:41 PST | 07 Mar 24 09:41 PST |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-618000 ssh findmnt                                                                                        | functional-618000 | jenkins | v1.32.0 | 07 Mar 24 09:41 PST | 07 Mar 24 09:41 PST |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-618000                                                                                                 | functional-618000 | jenkins | v1.32.0 | 07 Mar 24 09:41 PST |                     |
	|           | --kill=true                                                                                                          |                   |         |         |                     |                     |
	| start     | -p functional-618000                                                                                                 | functional-618000 | jenkins | v1.32.0 | 07 Mar 24 09:41 PST |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-618000 --dry-run                                                                                       | functional-618000 | jenkins | v1.32.0 | 07 Mar 24 09:41 PST |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-618000                                                                                                 | functional-618000 | jenkins | v1.32.0 | 07 Mar 24 09:41 PST |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                   | functional-618000 | jenkins | v1.32.0 | 07 Mar 24 09:41 PST |                     |
	|           | -p functional-618000                                                                                                 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
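
The mount entries in the audit table above exercise minikube's 9p mount flow: a host directory is served into the guest, the mount is verified over ssh with findmnt, and it is torn down again. A minimal sketch of that flow using the same subcommands the table records (the host path here is illustrative, not taken from the run):

    # serve a host directory into the guest over 9p on a fixed port
    minikube -p functional-618000 mount /tmp/mount-src:/mount-9p --port 46464 &

    # verify from inside the guest that a 9p filesystem is mounted there
    minikube -p functional-618000 ssh "findmnt -T /mount-9p | grep 9p"

    # tear the mount down again
    minikube -p functional-618000 ssh "sudo umount -f /mount-9p"
    minikube mount -p functional-618000 --kill=true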
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 09:41:10
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 09:41:10.318624    2740 out.go:291] Setting OutFile to fd 1 ...
	I0307 09:41:10.318741    2740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 09:41:10.318744    2740 out.go:304] Setting ErrFile to fd 2...
	I0307 09:41:10.318747    2740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 09:41:10.318875    2740 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 09:41:10.320293    2740 out.go:298] Setting JSON to false
	I0307 09:41:10.338536    2740 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2442,"bootTime":1709830828,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 09:41:10.338622    2740 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 09:41:10.343070    2740 out.go:177] * [functional-618000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 09:41:10.349986    2740 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 09:41:10.354143    2740 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 09:41:10.350098    2740 notify.go:220] Checking for updates...
	I0307 09:41:10.360028    2740 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 09:41:10.363075    2740 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 09:41:10.366121    2740 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 09:41:10.367347    2740 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 09:41:10.370337    2740 config.go:182] Loaded profile config "functional-618000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 09:41:10.370574    2740 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 09:41:10.378877    2740 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 09:41:10.382040    2740 start.go:297] selected driver: qemu2
	I0307 09:41:10.382051    2740 start.go:901] validating driver "qemu2" against &{Name:functional-618000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-618000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 09:41:10.382126    2740 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 09:41:10.389096    2740 out.go:177] 
	W0307 09:41:10.393081    2740 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0307 09:41:10.397040    2740 out.go:177] 
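
The RSRC_INSUFFICIENT_REQ_MEMORY exit above is minikube's pre-flight validation rejecting the "start --dry-run --memory 250MB" invocation recorded in the audit table: any requested allocation below the usable minimum of 1800MB fails before the qemu2 driver is ever invoked. A sketch of the failing call plus a variant that is assumed to clear this particular check:

    # fails pre-flight validation: 250MiB is below the 1800MB floor
    minikube start -p functional-618000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2

    # a request above the floor is assumed to pass the memory check
    minikube start -p functional-618000 --dry-run --memory 2048MB --alsologtostderr --driver=qemu2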
	
	
	==> Docker <==
	Mar 07 17:40:59 functional-618000 dockerd[7280]: time="2024-03-07T17:40:59.600072804Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 07 17:40:59 functional-618000 dockerd[7280]: time="2024-03-07T17:40:59.600081220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 17:40:59 functional-618000 dockerd[7280]: time="2024-03-07T17:40:59.600109093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 17:40:59 functional-618000 cri-dockerd[7479]: time="2024-03-07T17:40:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/95e926f5a84c9cc2397ac81f028bce3fc6b822fe36c7650af44aa4bf09be5788/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Mar 07 17:41:05 functional-618000 cri-dockerd[7479]: time="2024-03-07T17:41:05Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Mar 07 17:41:05 functional-618000 dockerd[7280]: time="2024-03-07T17:41:05.230158324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 07 17:41:05 functional-618000 dockerd[7280]: time="2024-03-07T17:41:05.230186739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 07 17:41:05 functional-618000 dockerd[7280]: time="2024-03-07T17:41:05.230192239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 17:41:05 functional-618000 dockerd[7280]: time="2024-03-07T17:41:05.230391225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 17:41:05 functional-618000 dockerd[7272]: time="2024-03-07T17:41:05.266516129Z" level=info msg="ignoring event" container=7c00d8d671d962c17f20c15cfe9a7b081d8847d6715e604056a28668e92b808b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 07 17:41:05 functional-618000 dockerd[7280]: time="2024-03-07T17:41:05.266744738Z" level=info msg="shim disconnected" id=7c00d8d671d962c17f20c15cfe9a7b081d8847d6715e604056a28668e92b808b namespace=moby
	Mar 07 17:41:05 functional-618000 dockerd[7280]: time="2024-03-07T17:41:05.266823357Z" level=warning msg="cleaning up after shim disconnected" id=7c00d8d671d962c17f20c15cfe9a7b081d8847d6715e604056a28668e92b808b namespace=moby
	Mar 07 17:41:05 functional-618000 dockerd[7280]: time="2024-03-07T17:41:05.266833190Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 07 17:41:05 functional-618000 dockerd[7280]: time="2024-03-07T17:41:05.993703428Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 07 17:41:05 functional-618000 dockerd[7280]: time="2024-03-07T17:41:05.993741883Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 07 17:41:05 functional-618000 dockerd[7280]: time="2024-03-07T17:41:05.993747633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 17:41:05 functional-618000 dockerd[7280]: time="2024-03-07T17:41:05.993775089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 17:41:06 functional-618000 dockerd[7272]: time="2024-03-07T17:41:06.016121551Z" level=info msg="ignoring event" container=0a695670d7fd7dd375f14a67ccc3931d00db60af9d1bcdb83cdfd26886c2a6a7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 07 17:41:06 functional-618000 dockerd[7280]: time="2024-03-07T17:41:06.016318162Z" level=info msg="shim disconnected" id=0a695670d7fd7dd375f14a67ccc3931d00db60af9d1bcdb83cdfd26886c2a6a7 namespace=moby
	Mar 07 17:41:06 functional-618000 dockerd[7280]: time="2024-03-07T17:41:06.016346577Z" level=warning msg="cleaning up after shim disconnected" id=0a695670d7fd7dd375f14a67ccc3931d00db60af9d1bcdb83cdfd26886c2a6a7 namespace=moby
	Mar 07 17:41:06 functional-618000 dockerd[7280]: time="2024-03-07T17:41:06.016350827Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 07 17:41:06 functional-618000 dockerd[7272]: time="2024-03-07T17:41:06.551052987Z" level=info msg="ignoring event" container=95e926f5a84c9cc2397ac81f028bce3fc6b822fe36c7650af44aa4bf09be5788 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 07 17:41:06 functional-618000 dockerd[7280]: time="2024-03-07T17:41:06.551129607Z" level=info msg="shim disconnected" id=95e926f5a84c9cc2397ac81f028bce3fc6b822fe36c7650af44aa4bf09be5788 namespace=moby
	Mar 07 17:41:06 functional-618000 dockerd[7280]: time="2024-03-07T17:41:06.551156272Z" level=warning msg="cleaning up after shim disconnected" id=95e926f5a84c9cc2397ac81f028bce3fc6b822fe36c7650af44aa4bf09be5788 namespace=moby
	Mar 07 17:41:06 functional-618000 dockerd[7280]: time="2024-03-07T17:41:06.551161771Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	0a695670d7fd7       72565bf5bbedf                                                                                         5 seconds ago        Exited              echoserver-arm            3                   c0fe0b6b156c5       hello-node-759d89bdcc-v57k9
	7c00d8d671d96       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 seconds ago        Exited              mount-munger              0                   95e926f5a84c9       busybox-mount
	492468e2cac14       72565bf5bbedf                                                                                         17 seconds ago       Exited              echoserver-arm            2                   6f2ddeb05fcc0       hello-node-connect-7799dfb7c6-2dhg4
	7c2e799c3fd6b       nginx@sha256:c26ae7472d624ba1fafd296e73cecc4f93f853088e6a9c13c0d52f6ca5865107                         20 seconds ago       Running             myfrontend                0                   153a903af55c2       sp-pod
	e3bf4c9ccecd2       nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9                         36 seconds ago       Running             nginx                     0                   3f992ec0cf285       nginx-svc
	f2ef71782582e       97e04611ad434                                                                                         About a minute ago   Running             coredns                   2                   36d5b9da24554       coredns-5dd5756b68-k7vhm
	18c008abbed8b       3ca3ca488cf13                                                                                         About a minute ago   Running             kube-proxy                2                   2e2ddf7340499       kube-proxy-dtccl
	667a370ec4894       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       2                   ed432c4b129da       storage-provisioner
	3ce2162db3e19       9cdd6470f48c8                                                                                         About a minute ago   Running             etcd                      2                   a39da693e5baa       etcd-functional-618000
	be051c4690ffa       04b4c447bb9d4                                                                                         About a minute ago   Running             kube-apiserver            0                   fd413200ce4d1       kube-apiserver-functional-618000
	d177a478f91f1       05c284c929889                                                                                         About a minute ago   Running             kube-scheduler            2                   de01123ba8679       kube-scheduler-functional-618000
	0fe2876ce9b5c       9961cbceaf234                                                                                         About a minute ago   Running             kube-controller-manager   2                   6fbd096ca1b52       kube-controller-manager-functional-618000
	a14b00989bd0d       ba04bb24b9575                                                                                         2 minutes ago        Exited              storage-provisioner       1                   fbafdc0287c37       storage-provisioner
	ea4f71905d11c       97e04611ad434                                                                                         2 minutes ago        Exited              coredns                   1                   838cfa2088000       coredns-5dd5756b68-k7vhm
	68d419b524cff       3ca3ca488cf13                                                                                         2 minutes ago        Exited              kube-proxy                1                   aeecfb0c8a2c1       kube-proxy-dtccl
	4f108295908c4       05c284c929889                                                                                         2 minutes ago        Exited              kube-scheduler            1                   85da407b53537       kube-scheduler-functional-618000
	eb8ee8b0c8c2d       9961cbceaf234                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   f8460f9f7a2da       kube-controller-manager-functional-618000
	21a49ddb4af8b       9cdd6470f48c8                                                                                         2 minutes ago        Exited              etcd                      1                   bf7ad023d0c23       etcd-functional-618000
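
Both echoserver-arm containers in the listing above sit in the Exited state with rising ATTEMPT counts (2 and 3), i.e. they are crash-looping rather than serving, which lines up with the TestFunctional/parallel/ServiceCmdConnect failure this log was collected for. One way to pull the crash evidence from the host with standard kubectl (pod names exactly as listed above):

    # logs from the previous, crashed instance of the container
    kubectl logs hello-node-connect-7799dfb7c6-2dhg4 --previous

    # restart counts, last state, and events for the pod
    kubectl describe pod hello-node-connect-7799dfb7c6-2dhg4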
	
	
	==> coredns [ea4f71905d11] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:33061 - 9219 "HINFO IN 6534021787342519736.9116763809054881262. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004137388s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f2ef71782582] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55503 - 59289 "HINFO IN 6378402581787656802.1821473510966994526. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004960305s
	[INFO] 10.244.0.1:22809 - 11056 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000089161s
	[INFO] 10.244.0.1:60359 - 34671 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000084244s
	[INFO] 10.244.0.1:21185 - 58724 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000026832s
	[INFO] 10.244.0.1:64905 - 63623 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.002317964s
	[INFO] 10.244.0.1:6593 - 39780 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000073453s
	[INFO] 10.244.0.1:35246 - 34615 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.00010695s
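
The queries logged above show nginx-svc.default.svc.cluster.local resolving with NOERROR for both A and AAAA records, so in-cluster DNS itself is healthy. A lookup like these can be reproduced from inside the cluster with a throwaway pod (a sketch; the image is the busybox build already pulled in the Docker log above):

    kubectl run dns-probe --rm -it --restart=Never \
      --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc -- \
      nslookup nginx-svc.default.svc.cluster.local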
	
	
	==> describe nodes <==
	Name:               functional-618000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-618000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c2be9b5c96c7962d271300acacf405d2402b272f
	                    minikube.k8s.io/name=functional-618000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_07T09_38_28_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Mar 2024 17:38:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-618000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Mar 2024 17:41:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Mar 2024 17:40:53 +0000   Thu, 07 Mar 2024 17:38:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Mar 2024 17:40:53 +0000   Thu, 07 Mar 2024 17:38:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Mar 2024 17:40:53 +0000   Thu, 07 Mar 2024 17:38:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Mar 2024 17:40:53 +0000   Thu, 07 Mar 2024 17:38:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-618000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904752Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904752Ki
	  pods:               110
	System Info:
	  Machine ID:                 a72e4d361c2c4660afc3d8673a759017
	  System UUID:                a72e4d361c2c4660afc3d8673a759017
	  Boot ID:                    5a64bb5d-6617-43b3-9762-eb4d6ac23040
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-759d89bdcc-v57k9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  default                     hello-node-connect-7799dfb7c6-2dhg4          0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                 coredns-5dd5756b68-k7vhm                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m29s
	  kube-system                 etcd-functional-618000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m43s
	  kube-system                 kube-apiserver-functional-618000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-controller-manager-functional-618000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m42s
	  kube-system                 kube-proxy-dtccl                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-scheduler-functional-618000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m42s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m28s                  kube-proxy       
	  Normal  Starting                 78s                    kube-proxy       
	  Normal  Starting                 2m8s                   kube-proxy       
	  Normal  Starting                 2m47s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m47s (x8 over 2m47s)  kubelet          Node functional-618000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m47s (x8 over 2m47s)  kubelet          Node functional-618000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m47s (x7 over 2m47s)  kubelet          Node functional-618000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     2m42s                  kubelet          Node functional-618000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m42s                  kubelet          Node functional-618000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m42s                  kubelet          Node functional-618000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m42s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m38s                  kubelet          Node functional-618000 status is now: NodeReady
	  Normal  RegisteredNode           2m30s                  node-controller  Node functional-618000 event: Registered Node functional-618000 in Controller
	  Normal  NodeHasNoDiskPressure    2m11s (x8 over 2m11s)  kubelet          Node functional-618000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m11s (x8 over 2m11s)  kubelet          Node functional-618000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m11s (x7 over 2m11s)  kubelet          Node functional-618000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           116s                   node-controller  Node functional-618000 event: Registered Node functional-618000 in Controller
	  Normal  Starting                 82s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  81s (x8 over 82s)      kubelet          Node functional-618000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    81s (x8 over 82s)      kubelet          Node functional-618000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     81s (x7 over 82s)      kubelet          Node functional-618000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  81s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           67s                    node-controller  Node functional-618000 event: Registered Node functional-618000 in Controller
	
	
	==> dmesg <==
	[Mar 7 17:39] kauditd_printk_skb: 202 callbacks suppressed
	[ +11.920413] kauditd_printk_skb: 29 callbacks suppressed
	[  +2.881511] systemd-fstab-generator[6142]: Ignoring "noauto" option for root device
	[ +18.036328] systemd-fstab-generator[6800]: Ignoring "noauto" option for root device
	[  +0.056163] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.091341] systemd-fstab-generator[6834]: Ignoring "noauto" option for root device
	[  +0.090213] systemd-fstab-generator[6846]: Ignoring "noauto" option for root device
	[  +0.104872] systemd-fstab-generator[6860]: Ignoring "noauto" option for root device
	[  +5.094468] kauditd_printk_skb: 89 callbacks suppressed
	[  +6.285736] systemd-fstab-generator[7432]: Ignoring "noauto" option for root device
	[  +0.094787] systemd-fstab-generator[7444]: Ignoring "noauto" option for root device
	[  +0.079866] systemd-fstab-generator[7456]: Ignoring "noauto" option for root device
	[  +0.080495] systemd-fstab-generator[7471]: Ignoring "noauto" option for root device
	[  +0.209484] systemd-fstab-generator[7620]: Ignoring "noauto" option for root device
	[  +1.070454] systemd-fstab-generator[7740]: Ignoring "noauto" option for root device
	[  +3.459960] kauditd_printk_skb: 202 callbacks suppressed
	[Mar 7 17:40] kauditd_printk_skb: 29 callbacks suppressed
	[  +1.555177] systemd-fstab-generator[8983]: Ignoring "noauto" option for root device
	[  +4.695777] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.332152] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.863359] kauditd_printk_skb: 4 callbacks suppressed
	[  +7.766342] kauditd_printk_skb: 4 callbacks suppressed
	[  +7.064894] kauditd_printk_skb: 13 callbacks suppressed
	[ +10.826710] kauditd_printk_skb: 39 callbacks suppressed
	[ +10.434364] kauditd_printk_skb: 17 callbacks suppressed
	
	
	==> etcd [21a49ddb4af8] <==
	{"level":"info","ts":"2024-03-07T17:39:00.064871Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-03-07T17:39:01.5453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-07T17:39:01.545401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-07T17:39:01.545467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-03-07T17:39:01.54549Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-03-07T17:39:01.545499Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-03-07T17:39:01.545515Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-03-07T17:39:01.545527Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-03-07T17:39:01.551227Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-618000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-07T17:39:01.551246Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-07T17:39:01.551271Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-07T17:39:01.553402Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-03-07T17:39:01.553576Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-07T17:39:01.551526Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-07T17:39:01.553751Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-07T17:39:36.020439Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-07T17:39:36.020498Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-618000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-03-07T17:39:36.020537Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-07T17:39:36.020575Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-07T17:39:36.029443Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-07T17:39:36.029469Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-07T17:39:36.030717Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-03-07T17:39:36.031793Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-03-07T17:39:36.031821Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-03-07T17:39:36.031825Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-618000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [3ce2162db3e1] <==
	{"level":"info","ts":"2024-03-07T17:39:49.708533Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-07T17:39:49.708543Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-07T17:39:49.708769Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2024-03-07T17:39:49.708885Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-03-07T17:39:49.709036Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T17:39:49.709116Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T17:39:49.713768Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-07T17:39:49.719799Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-07T17:39:49.719824Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-07T17:39:49.719688Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-03-07T17:39:49.719848Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-03-07T17:39:51.001849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-03-07T17:39:51.002022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-03-07T17:39:51.002071Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-03-07T17:39:51.002102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-03-07T17:39:51.002117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-03-07T17:39:51.002145Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-03-07T17:39:51.002236Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-03-07T17:39:51.007576Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-618000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-07T17:39:51.007578Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-07T17:39:51.008437Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-07T17:39:51.010427Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-03-07T17:39:51.0112Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-07T17:39:51.011629Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-07T17:39:51.011664Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 17:41:11 up 3 min,  0 users,  load average: 0.48, 0.29, 0.12
	Linux functional-618000 5.10.207 #1 SMP PREEMPT Thu Feb 22 23:40:42 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [be051c4690ff] <==
	I0307 17:39:51.697562       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0307 17:39:51.697564       1 cache.go:39] Caches are synced for autoregister controller
	I0307 17:39:51.734687       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0307 17:39:52.591819       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0307 17:39:52.695002       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I0307 17:39:52.695480       1 controller.go:624] quota admission added evaluator for: endpoints
	I0307 17:39:52.696729       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0307 17:39:53.032287       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0307 17:39:53.035867       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0307 17:39:53.047196       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0307 17:39:53.055045       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0307 17:39:53.057216       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	E0307 17:40:01.690365       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	I0307 17:40:10.330680       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.1.66"}
	E0307 17:40:11.691382       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	I0307 17:40:16.616177       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0307 17:40:16.659687       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.109.229"}
	E0307 17:40:21.691811       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
	I0307 17:40:31.281337       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.96.163.233"}
	E0307 17:40:31.692054       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	I0307 17:40:40.730233       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.144.32"}
	E0307 17:40:41.692529       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
	E0307 17:40:51.693071       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
	E0307 17:41:01.693643       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","leader-election","node-high","system","workload-high","workload-low","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	I0307 17:41:11.029651       1 controller.go:624] quota admission added evaluator for: namespaces
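
The repeating apf_controller "Unable to derive new concurrency limits" errors above fire on a roughly ten-second cadence and name all eight built-in priority levels; they come from the API Priority and Fairness controller and, as the interleaved quota-admission and clusterIP-allocation lines show, do not block normal request handling. The flow-control objects the controller is solving over can be inspected with standard kubectl:

    kubectl get prioritylevelconfigurations.flowcontrol.apiserver.k8s.io
    kubectl get flowschemas.flowcontrol.apiserver.k8s.io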
	
	
	==> kube-controller-manager [0fe2876ce9b5] <==
	I0307 17:40:42.349975       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="25.373µs"
	I0307 17:40:51.977561       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="31.623µs"
	I0307 17:40:53.980778       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="40.664µs"
	I0307 17:40:54.427089       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="29.165µs"
	I0307 17:41:06.493057       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="26.206µs"
	I0307 17:41:06.983166       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="23.665µs"
	I0307 17:41:11.077111       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-7fd5cb4ddc to 1"
	I0307 17:41:11.095122       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I0307 17:41:11.095463       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7fd5cb4ddc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0307 17:41:11.110447       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="33.659729ms"
	E0307 17:41:11.110459       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0307 17:41:11.111468       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0307 17:41:11.123302       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="28.411532ms"
	E0307 17:41:11.123389       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0307 17:41:11.128581       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.147996ms"
	E0307 17:41:11.128603       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0307 17:41:11.128708       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0307 17:41:11.130291       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="19.820165ms"
	E0307 17:41:11.130359       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0307 17:41:11.130350       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7fd5cb4ddc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0307 17:41:11.132049       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="3.433664ms"
	E0307 17:41:11.132110       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0307 17:41:11.132115       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0307 17:41:11.150804       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-q6psd"
	I0307 17:41:11.157584       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-7fd5cb4ddc-m24zq"
	
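Note: the FailedCreate / "serviceaccount not found" churn above is the usual apply-ordering race when the dashboard addon manifests land: the ReplicaSets sync before the kubernetes-dashboard ServiceAccount exists, and the SuccessfulCreate events milliseconds later (17:41:11.15) show it resolving on its own. Had it not resolved, a minimal check would be (sketch, not part of the test run; names are those of the stock dashboard addon):

    kubectl --context functional-618000 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard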
	
	==> kube-controller-manager [eb8ee8b0c8c2] <==
	I0307 17:39:14.682497       1 shared_informer.go:318] Caches are synced for TTL
	I0307 17:39:14.684093       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0307 17:39:14.684100       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0307 17:39:14.685385       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0307 17:39:14.686399       1 shared_informer.go:318] Caches are synced for GC
	I0307 17:39:14.687533       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0307 17:39:14.687609       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.234µs"
	I0307 17:39:14.691860       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0307 17:39:14.700276       1 shared_informer.go:318] Caches are synced for deployment
	I0307 17:39:14.700280       1 shared_informer.go:318] Caches are synced for job
	I0307 17:39:14.716180       1 shared_informer.go:318] Caches are synced for daemon sets
	I0307 17:39:14.717147       1 shared_informer.go:318] Caches are synced for HPA
	I0307 17:39:14.771124       1 shared_informer.go:318] Caches are synced for PV protection
	I0307 17:39:14.775282       1 shared_informer.go:318] Caches are synced for persistent volume
	I0307 17:39:14.776490       1 shared_informer.go:318] Caches are synced for attach detach
	I0307 17:39:14.780913       1 shared_informer.go:318] Caches are synced for ephemeral
	I0307 17:39:14.782991       1 shared_informer.go:318] Caches are synced for resource quota
	I0307 17:39:14.785170       1 shared_informer.go:318] Caches are synced for stateful set
	I0307 17:39:14.789699       1 shared_informer.go:318] Caches are synced for expand
	I0307 17:39:14.798864       1 shared_informer.go:318] Caches are synced for resource quota
	I0307 17:39:14.816609       1 shared_informer.go:318] Caches are synced for PVC protection
	I0307 17:39:14.881397       1 shared_informer.go:318] Caches are synced for disruption
	I0307 17:39:15.205023       1 shared_informer.go:318] Caches are synced for garbage collector
	I0307 17:39:15.279556       1 shared_informer.go:318] Caches are synced for garbage collector
	I0307 17:39:15.279582       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	
	==> kube-proxy [18c008abbed8] <==
	I0307 17:39:52.463785       1 server_others.go:69] "Using iptables proxy"
	I0307 17:39:52.469576       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0307 17:39:52.477721       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0307 17:39:52.477733       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0307 17:39:52.478298       1 server_others.go:152] "Using iptables Proxier"
	I0307 17:39:52.478317       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0307 17:39:52.478383       1 server.go:846] "Version info" version="v1.28.4"
	I0307 17:39:52.478391       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0307 17:39:52.478682       1 config.go:188] "Starting service config controller"
	I0307 17:39:52.478692       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0307 17:39:52.478699       1 config.go:97] "Starting endpoint slice config controller"
	I0307 17:39:52.478710       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0307 17:39:52.478930       1 config.go:315] "Starting node config controller"
	I0307 17:39:52.478933       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0307 17:39:52.580114       1 shared_informer.go:318] Caches are synced for node config
	I0307 17:39:52.580114       1 shared_informer.go:318] Caches are synced for service config
	I0307 17:39:52.580136       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
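Note: this kube-proxy instance (like its successor below) runs the iptables proxier in IPv4 single-stack mode and sets route_localnet=1 so NodePorts answer on localhost. The two opt-outs its log message names are ordinary kube-proxy flags; a sketch with illustrative values, not the settings used in this run:

    kube-proxy --iptables-localhost-nodeports=false   # stop exposing NodePorts on 127.0.0.1
    kube-proxy --nodeport-addresses=192.168.105.0/24  # only serve NodePorts on this CIDR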
	
	==> kube-proxy [68d419b524cf] <==
	I0307 17:39:02.914744       1 server_others.go:69] "Using iptables proxy"
	I0307 17:39:02.921555       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0307 17:39:02.934327       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0307 17:39:02.934343       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0307 17:39:02.934996       1 server_others.go:152] "Using iptables Proxier"
	I0307 17:39:02.935044       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0307 17:39:02.935138       1 server.go:846] "Version info" version="v1.28.4"
	I0307 17:39:02.935146       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0307 17:39:02.935490       1 config.go:188] "Starting service config controller"
	I0307 17:39:02.935499       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0307 17:39:02.935537       1 config.go:97] "Starting endpoint slice config controller"
	I0307 17:39:02.935542       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0307 17:39:02.935757       1 config.go:315] "Starting node config controller"
	I0307 17:39:02.935759       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0307 17:39:03.035640       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0307 17:39:03.035640       1 shared_informer.go:318] Caches are synced for service config
	I0307 17:39:03.035779       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [4f108295908c] <==
	I0307 17:39:00.409611       1 serving.go:348] Generated self-signed cert in-memory
	W0307 17:39:02.160902       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0307 17:39:02.160916       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0307 17:39:02.160921       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0307 17:39:02.160924       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0307 17:39:02.188685       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0307 17:39:02.188700       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0307 17:39:02.189585       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0307 17:39:02.189644       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0307 17:39:02.189652       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0307 17:39:02.189659       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0307 17:39:02.290543       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0307 17:39:36.012307       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0307 17:39:36.012327       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0307 17:39:36.012401       1 run.go:74] "command failed" err="finished without leader elect"
	
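Note: "finished without leader elect" is how the scheduler exits when it is torn down before winning (or while holding) its leadership lease, which is expected given the component restart at 17:39:36; the replacement scheduler [d177a478f91f] below takes over. The current holder can be read from the coordination lease (sketch):

    kubectl --context functional-618000 -n kube-system get lease kube-scheduler -o jsonpath='{.spec.holderIdentity}'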
	
	==> kube-scheduler [d177a478f91f] <==
	W0307 17:39:51.660957       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0307 17:39:51.660990       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0307 17:39:51.661021       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0307 17:39:51.661057       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0307 17:39:51.661090       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0307 17:39:51.661106       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0307 17:39:51.661145       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0307 17:39:51.661172       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0307 17:39:51.661201       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0307 17:39:51.661231       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0307 17:39:51.661262       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0307 17:39:51.661278       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0307 17:39:51.661319       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0307 17:39:51.661342       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0307 17:39:51.661438       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0307 17:39:51.661470       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0307 17:39:51.661500       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0307 17:39:51.661527       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0307 17:39:51.661553       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0307 17:39:51.661573       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0307 17:39:51.661611       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0307 17:39:51.661631       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0307 17:39:51.663459       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0307 17:39:51.663470       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0307 17:39:52.750071       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
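Note: the burst of "forbidden" list/watch warnings is a startup race against the restarting apiserver's RBAC machinery, and the block ends with the caches syncing cleanly at 17:39:52. Had they persisted, the scheduler's effective permissions could be probed via impersonation (sketch):

    kubectl --context functional-618000 auth can-i list nodes --as=system:kube-scheduler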
	
	==> kubelet <==
	Mar 07 17:40:54 functional-618000 kubelet[7747]: E0307 17:40:54.420620    7747 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-2dhg4_default(86800dcd-1e71-4b49-a87c-0c279831fa20)\"" pod="default/hello-node-connect-7799dfb7c6-2dhg4" podUID="86800dcd-1e71-4b49-a87c-0c279831fa20"
	Mar 07 17:40:59 functional-618000 kubelet[7747]: I0307 17:40:59.252493    7747 topology_manager.go:215] "Topology Admit Handler" podUID="2bc2637c-3594-4bde-bf46-63e52d60a520" podNamespace="default" podName="busybox-mount"
	Mar 07 17:40:59 functional-618000 kubelet[7747]: I0307 17:40:59.353372    7747 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/2bc2637c-3594-4bde-bf46-63e52d60a520-test-volume\") pod \"busybox-mount\" (UID: \"2bc2637c-3594-4bde-bf46-63e52d60a520\") " pod="default/busybox-mount"
	Mar 07 17:40:59 functional-618000 kubelet[7747]: I0307 17:40:59.353394    7747 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsmr9\" (UniqueName: \"kubernetes.io/projected/2bc2637c-3594-4bde-bf46-63e52d60a520-kube-api-access-xsmr9\") pod \"busybox-mount\" (UID: \"2bc2637c-3594-4bde-bf46-63e52d60a520\") " pod="default/busybox-mount"
	Mar 07 17:41:05 functional-618000 kubelet[7747]: I0307 17:41:05.972931    7747 scope.go:117] "RemoveContainer" containerID="b41ce1121a12709ef3b6b82302067536ef76472da549e3723df152ad4fe10570"
	Mar 07 17:41:06 functional-618000 kubelet[7747]: I0307 17:41:06.481069    7747 scope.go:117] "RemoveContainer" containerID="b41ce1121a12709ef3b6b82302067536ef76472da549e3723df152ad4fe10570"
	Mar 07 17:41:06 functional-618000 kubelet[7747]: I0307 17:41:06.481200    7747 scope.go:117] "RemoveContainer" containerID="0a695670d7fd7dd375f14a67ccc3931d00db60af9d1bcdb83cdfd26886c2a6a7"
	Mar 07 17:41:06 functional-618000 kubelet[7747]: E0307 17:41:06.481276    7747 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 40s restarting failed container=echoserver-arm pod=hello-node-759d89bdcc-v57k9_default(7bfb00f1-b699-4703-bf7d-7dd17706ba55)\"" pod="default/hello-node-759d89bdcc-v57k9" podUID="7bfb00f1-b699-4703-bf7d-7dd17706ba55"
	Mar 07 17:41:06 functional-618000 kubelet[7747]: I0307 17:41:06.600686    7747 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsmr9\" (UniqueName: \"kubernetes.io/projected/2bc2637c-3594-4bde-bf46-63e52d60a520-kube-api-access-xsmr9\") pod \"2bc2637c-3594-4bde-bf46-63e52d60a520\" (UID: \"2bc2637c-3594-4bde-bf46-63e52d60a520\") "
	Mar 07 17:41:06 functional-618000 kubelet[7747]: I0307 17:41:06.600711    7747 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/2bc2637c-3594-4bde-bf46-63e52d60a520-test-volume\") pod \"2bc2637c-3594-4bde-bf46-63e52d60a520\" (UID: \"2bc2637c-3594-4bde-bf46-63e52d60a520\") "
	Mar 07 17:41:06 functional-618000 kubelet[7747]: I0307 17:41:06.600864    7747 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2bc2637c-3594-4bde-bf46-63e52d60a520-test-volume" (OuterVolumeSpecName: "test-volume") pod "2bc2637c-3594-4bde-bf46-63e52d60a520" (UID: "2bc2637c-3594-4bde-bf46-63e52d60a520"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Mar 07 17:41:06 functional-618000 kubelet[7747]: I0307 17:41:06.603396    7747 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bc2637c-3594-4bde-bf46-63e52d60a520-kube-api-access-xsmr9" (OuterVolumeSpecName: "kube-api-access-xsmr9") pod "2bc2637c-3594-4bde-bf46-63e52d60a520" (UID: "2bc2637c-3594-4bde-bf46-63e52d60a520"). InnerVolumeSpecName "kube-api-access-xsmr9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 07 17:41:06 functional-618000 kubelet[7747]: I0307 17:41:06.701695    7747 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xsmr9\" (UniqueName: \"kubernetes.io/projected/2bc2637c-3594-4bde-bf46-63e52d60a520-kube-api-access-xsmr9\") on node \"functional-618000\" DevicePath \"\""
	Mar 07 17:41:06 functional-618000 kubelet[7747]: I0307 17:41:06.701712    7747 reconciler_common.go:300] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/2bc2637c-3594-4bde-bf46-63e52d60a520-test-volume\") on node \"functional-618000\" DevicePath \"\""
	Mar 07 17:41:06 functional-618000 kubelet[7747]: I0307 17:41:06.972220    7747 scope.go:117] "RemoveContainer" containerID="492468e2cac14d18da55fd77cb8226408e35b0f617fc6f048f7c246904a312ab"
	Mar 07 17:41:06 functional-618000 kubelet[7747]: E0307 17:41:06.972814    7747 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-2dhg4_default(86800dcd-1e71-4b49-a87c-0c279831fa20)\"" pod="default/hello-node-connect-7799dfb7c6-2dhg4" podUID="86800dcd-1e71-4b49-a87c-0c279831fa20"
	Mar 07 17:41:07 functional-618000 kubelet[7747]: I0307 17:41:07.489931    7747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95e926f5a84c9cc2397ac81f028bce3fc6b822fe36c7650af44aa4bf09be5788"
	Mar 07 17:41:11 functional-618000 kubelet[7747]: I0307 17:41:11.157986    7747 topology_manager.go:215] "Topology Admit Handler" podUID="f7e3eaae-4792-458a-9a0c-59301cfdd3bc" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-q6psd"
	Mar 07 17:41:11 functional-618000 kubelet[7747]: E0307 17:41:11.158024    7747 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2bc2637c-3594-4bde-bf46-63e52d60a520" containerName="mount-munger"
	Mar 07 17:41:11 functional-618000 kubelet[7747]: I0307 17:41:11.158042    7747 memory_manager.go:346] "RemoveStaleState removing state" podUID="2bc2637c-3594-4bde-bf46-63e52d60a520" containerName="mount-munger"
	Mar 07 17:41:11 functional-618000 kubelet[7747]: I0307 17:41:11.176354    7747 topology_manager.go:215] "Topology Admit Handler" podUID="ce24275d-c2fa-4dcf-b9d0-cef0a034ddeb" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-7fd5cb4ddc-m24zq"
	Mar 07 17:41:11 functional-618000 kubelet[7747]: I0307 17:41:11.237679    7747 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ce24275d-c2fa-4dcf-b9d0-cef0a034ddeb-tmp-volume\") pod \"dashboard-metrics-scraper-7fd5cb4ddc-m24zq\" (UID: \"ce24275d-c2fa-4dcf-b9d0-cef0a034ddeb\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-m24zq"
	Mar 07 17:41:11 functional-618000 kubelet[7747]: I0307 17:41:11.237705    7747 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pwtm\" (UniqueName: \"kubernetes.io/projected/ce24275d-c2fa-4dcf-b9d0-cef0a034ddeb-kube-api-access-7pwtm\") pod \"dashboard-metrics-scraper-7fd5cb4ddc-m24zq\" (UID: \"ce24275d-c2fa-4dcf-b9d0-cef0a034ddeb\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-m24zq"
	Mar 07 17:41:11 functional-618000 kubelet[7747]: I0307 17:41:11.237717    7747 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6swws\" (UniqueName: \"kubernetes.io/projected/f7e3eaae-4792-458a-9a0c-59301cfdd3bc-kube-api-access-6swws\") pod \"kubernetes-dashboard-8694d4445c-q6psd\" (UID: \"f7e3eaae-4792-458a-9a0c-59301cfdd3bc\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-q6psd"
	Mar 07 17:41:11 functional-618000 kubelet[7747]: I0307 17:41:11.237727    7747 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f7e3eaae-4792-458a-9a0c-59301cfdd3bc-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-q6psd\" (UID: \"f7e3eaae-4792-458a-9a0c-59301cfdd3bc\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-q6psd"
	
	
	==> storage-provisioner [667a370ec489] <==
	I0307 17:39:52.415051       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0307 17:39:52.430930       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0307 17:39:52.432543       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0307 17:40:09.817943       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0307 17:40:09.818688       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-618000_2e38b6db-6ea9-4729-bd1f-88bc8b37a392!
	I0307 17:40:09.819101       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"295dcb9f-7754-49da-a383-13b5a3f853d0", APIVersion:"v1", ResourceVersion:"630", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-618000_2e38b6db-6ea9-4729-bd1f-88bc8b37a392 became leader
	I0307 17:40:09.919281       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-618000_2e38b6db-6ea9-4729-bd1f-88bc8b37a392!
	I0307 17:40:37.902508       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0307 17:40:37.902532       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    22446fab-f94e-4e6a-9bc7-eb3a2e1b0dff 386 0 2024-03-07 17:38:42 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-03-07 17:38:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-38650dd8-b721-459e-ad37-e26f4ea41b7b &PersistentVolumeClaim{ObjectMeta:{myclaim  default  38650dd8-b721-459e-ad37-e26f4ea41b7b 736 0 2024-03-07 17:40:37 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-03-07 17:40:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-03-07 17:40:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0307 17:40:37.902792       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-38650dd8-b721-459e-ad37-e26f4ea41b7b" provisioned
	I0307 17:40:37.902802       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0307 17:40:37.902805       1 volume_store.go:212] Trying to save persistentvolume "pvc-38650dd8-b721-459e-ad37-e26f4ea41b7b"
	I0307 17:40:37.903240       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"38650dd8-b721-459e-ad37-e26f4ea41b7b", APIVersion:"v1", ResourceVersion:"736", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0307 17:40:37.907558       1 volume_store.go:219] persistentvolume "pvc-38650dd8-b721-459e-ad37-e26f4ea41b7b" saved
	I0307 17:40:37.907606       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"38650dd8-b721-459e-ad37-e26f4ea41b7b", APIVersion:"v1", ResourceVersion:"736", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-38650dd8-b721-459e-ad37-e26f4ea41b7b
	
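Note: this provisioner run is healthy: it acquires the kube-system/k8s.io-minikube-hostpath lease, provisions pvc-38650dd8-b721-459e-ad37-e26f4ea41b7b for default/myclaim under /tmp/hostpath-provisioner, and saves the PV. A spot check using the names from the log (sketch):

    kubectl --context functional-618000 get pv pvc-38650dd8-b721-459e-ad37-e26f4ea41b7b
    kubectl --context functional-618000 -n default get pvc myclaim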
	
	==> storage-provisioner [a14b00989bd0] <==
	I0307 17:39:02.923498       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0307 17:39:02.927828       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0307 17:39:02.927850       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0307 17:39:20.313228       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0307 17:39:20.313291       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-618000_7a29d6a7-1da8-4572-92b3-5d47dd3e1646!
	I0307 17:39:20.313432       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"295dcb9f-7754-49da-a383-13b5a3f853d0", APIVersion:"v1", ResourceVersion:"521", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-618000_7a29d6a7-1da8-4572-92b3-5d47dd3e1646 became leader
	I0307 17:39:20.414116       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-618000_7a29d6a7-1da8-4572-92b3-5d47dd3e1646!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-618000 -n functional-618000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-618000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-7fd5cb4ddc-m24zq kubernetes-dashboard-8694d4445c-q6psd
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-618000 describe pod busybox-mount dashboard-metrics-scraper-7fd5cb4ddc-m24zq kubernetes-dashboard-8694d4445c-q6psd
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-618000 describe pod busybox-mount dashboard-metrics-scraper-7fd5cb4ddc-m24zq kubernetes-dashboard-8694d4445c-q6psd: exit status 1 (45.201916ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-618000/192.168.105.4
	Start Time:       Thu, 07 Mar 2024 09:40:59 -0800
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://7c00d8d671d962c17f20c15cfe9a7b081d8847d6715e604056a28668e92b808b
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 07 Mar 2024 09:41:05 -0800
	      Finished:     Thu, 07 Mar 2024 09:41:05 -0800
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xsmr9 (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-xsmr9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  12s   default-scheduler  Successfully assigned default/busybox-mount to functional-618000
	  Normal  Pulling    12s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     6s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 5.536s (5.536s including waiting)
	  Normal  Created    6s    kubelet            Created container mount-munger
	  Normal  Started    6s    kubelet            Started container mount-munger

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-7fd5cb4ddc-m24zq" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8694d4445c-q6psd" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-618000 describe pod busybox-mount dashboard-metrics-scraper-7fd5cb4ddc-m24zq kubernetes-dashboard-8694d4445c-q6psd: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (30.97s)
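Two things stand out in this failure. First, busybox-mount shows up in the "non-running pods" list only because it Succeeded; the field selector status.phase!=Running matches completed pods as well as broken ones. Second, the kubelet log shows the test's real problem: the hello-node-connect pod's echoserver-arm container is in CrashLoopBackOff, so the service the test dials never answers. A triage sketch (the app=hello-node-connect label is the default that kubectl create deployment applies, assumed rather than taken from this run):

    kubectl --context functional-618000 describe pod -l app=hello-node-connect
    kubectl --context functional-618000 logs deployment/hello-node-connect --previous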

TestMutliControlPlane/serial/StopSecondaryNode (214.15s)

=== RUN   TestMutliControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-951000 node stop m02 -v=7 --alsologtostderr: (12.210282416s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 status -v=7 --alsologtostderr
E0307 09:48:00.385980    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/functional-618000/client.crt: no such file or directory
E0307 09:49:13.846430    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/client.crt: no such file or directory
E0307 09:50:16.518114    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/functional-618000/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-951000 status -v=7 --alsologtostderr: exit status 7 (2m55.975754916s)

-- stdout --
	ha-951000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-951000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-951000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-951000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0307 09:47:39.729905    3071 out.go:291] Setting OutFile to fd 1 ...
	I0307 09:47:39.730607    3071 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 09:47:39.730615    3071 out.go:304] Setting ErrFile to fd 2...
	I0307 09:47:39.730617    3071 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 09:47:39.730750    3071 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 09:47:39.730866    3071 out.go:298] Setting JSON to false
	I0307 09:47:39.730882    3071 mustload.go:65] Loading cluster: ha-951000
	I0307 09:47:39.730950    3071 notify.go:220] Checking for updates...
	I0307 09:47:39.731130    3071 config.go:182] Loaded profile config "ha-951000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 09:47:39.731139    3071 status.go:255] checking status of ha-951000 ...
	I0307 09:47:39.732167    3071 status.go:330] ha-951000 host status = "Running" (err=<nil>)
	I0307 09:47:39.732189    3071 host.go:66] Checking if "ha-951000" exists ...
	I0307 09:47:39.732303    3071 host.go:66] Checking if "ha-951000" exists ...
	I0307 09:47:39.732461    3071 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 09:47:39.732470    3071 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/ha-951000/id_rsa Username:docker}
	W0307 09:48:05.654831    3071 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0307 09:48:05.654958    3071 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0307 09:48:05.654979    3071 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0307 09:48:05.654989    3071 status.go:257] ha-951000 status: &{Name:ha-951000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0307 09:48:05.655010    3071 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0307 09:48:05.655021    3071 status.go:255] checking status of ha-951000-m02 ...
	I0307 09:48:05.655415    3071 status.go:330] ha-951000-m02 host status = "Stopped" (err=<nil>)
	I0307 09:48:05.655426    3071 status.go:343] host is not running, skipping remaining checks
	I0307 09:48:05.655431    3071 status.go:257] ha-951000-m02 status: &{Name:ha-951000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 09:48:05.655443    3071 status.go:255] checking status of ha-951000-m03 ...
	I0307 09:48:05.656821    3071 status.go:330] ha-951000-m03 host status = "Running" (err=<nil>)
	I0307 09:48:05.656835    3071 host.go:66] Checking if "ha-951000-m03" exists ...
	I0307 09:48:05.657120    3071 host.go:66] Checking if "ha-951000-m03" exists ...
	I0307 09:48:05.657353    3071 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 09:48:05.657367    3071 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/ha-951000-m03/id_rsa Username:docker}
	W0307 09:49:20.655694    3071 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0307 09:49:20.655745    3071 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0307 09:49:20.655752    3071 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0307 09:49:20.655755    3071 status.go:257] ha-951000-m03 status: &{Name:ha-951000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0307 09:49:20.655764    3071 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0307 09:49:20.655767    3071 status.go:255] checking status of ha-951000-m04 ...
	I0307 09:49:20.656531    3071 status.go:330] ha-951000-m04 host status = "Running" (err=<nil>)
	I0307 09:49:20.656539    3071 host.go:66] Checking if "ha-951000-m04" exists ...
	I0307 09:49:20.656645    3071 host.go:66] Checking if "ha-951000-m04" exists ...
	I0307 09:49:20.656772    3071 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 09:49:20.656778    3071 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/ha-951000-m04/id_rsa Username:docker}
	W0307 09:50:35.656372    3071 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0307 09:50:35.656435    3071 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0307 09:50:35.656445    3071 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0307 09:50:35.656449    3071 status.go:257] ha-951000-m04 status: &{Name:ha-951000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0307 09:50:35.656459    3071 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-951000 status -v=7 --alsologtostderr": ha-951000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-951000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-951000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-951000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-951000 status -v=7 --alsologtostderr": ha-951000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-951000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-951000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-951000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-951000 status -v=7 --alsologtostderr": ha-951000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-951000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-951000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-951000-m04
type: Worker
host: Error
kubelet: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-951000 -n ha-951000
E0307 09:50:44.222767    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/functional-618000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-951000 -n ha-951000: exit status 3 (25.959746042s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0307 09:51:01.615734    3104 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0307 09:51:01.615745    3104 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-951000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMutliControlPlane/serial/StopSecondaryNode (214.15s)
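The 214 s spent here is almost entirely SSH dial timeouts: each of the three surviving nodes costs the status probe roughly 75 s of "connect: operation timed out" (see the stderr above), yet only m02 was deliberately stopped. That every node's port 22 times out from the host points at host-side networking (socket_vmnet) rather than the guests themselves. A host-side reachability sketch, assuming the same guest IPs (macOS netcat; -G sets the connect timeout in seconds):

    for ip in 192.168.105.5 192.168.105.7 192.168.105.8; do
      nc -z -G 5 "$ip" 22 && echo "$ip: ssh reachable" || echo "$ip: unreachable"
    done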

TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (104.68s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m18.709169s)
ha_test.go:413: expected profile "ha-951000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-951000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-951000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-951000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-951000 -n ha-951000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-951000 -n ha-951000: exit status 3 (25.970213s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0307 09:52:46.290281    3124 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0307 09:52:46.290300    3124 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-951000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (104.68s)
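The assertion compares only the Status field of the profile ("Degraded" expected, "Stopped" reported); the escaped config blob above is incidental. When reading such a failure by hand, the same JSON is easier to inspect piped through jq (assuming jq is available on the host; it is not part of the harness):

    out/minikube-darwin-arm64 profile list --output json | jq -r '.valid[] | [.Name, .Status] | @tsv'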

TestMutliControlPlane/serial/RestartSecondaryNode (208.9s)

=== RUN   TestMutliControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-951000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.123936166s)

-- stdout --
	* Starting "ha-951000-m02" control-plane node in "ha-951000" cluster
	* Restarting existing qemu2 VM for "ha-951000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-951000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0307 09:52:46.336035    3129 out.go:291] Setting OutFile to fd 1 ...
	I0307 09:52:46.336354    3129 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 09:52:46.336358    3129 out.go:304] Setting ErrFile to fd 2...
	I0307 09:52:46.336361    3129 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 09:52:46.336492    3129 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 09:52:46.337375    3129 mustload.go:65] Loading cluster: ha-951000
	I0307 09:52:46.337631    3129 config.go:182] Loaded profile config "ha-951000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	W0307 09:52:46.338307    3129 host.go:58] "ha-951000-m02" host status: Stopped
	I0307 09:52:46.341913    3129 out.go:177] * Starting "ha-951000-m02" control-plane node in "ha-951000" cluster
	I0307 09:52:46.345873    3129 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 09:52:46.346174    3129 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 09:52:46.346182    3129 cache.go:56] Caching tarball of preloaded images
	I0307 09:52:46.346268    3129 preload.go:173] Found /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 09:52:46.346273    3129 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 09:52:46.346327    3129 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/ha-951000/config.json ...
	I0307 09:52:46.347762    3129 start.go:360] acquireMachinesLock for ha-951000-m02: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 09:52:46.347857    3129 start.go:364] duration metric: took 80.791µs to acquireMachinesLock for "ha-951000-m02"
	I0307 09:52:46.347864    3129 start.go:96] Skipping create...Using existing machine configuration
	I0307 09:52:46.347870    3129 fix.go:54] fixHost starting: m02
	I0307 09:52:46.347992    3129 fix.go:112] recreateIfNeeded on ha-951000-m02: state=Stopped err=<nil>
	W0307 09:52:46.347997    3129 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 09:52:46.351874    3129 out.go:177] * Restarting existing qemu2 VM for "ha-951000-m02" ...
	I0307 09:52:46.354940    3129 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/ha-951000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/ha-951000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/ha-951000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:8d:f2:25:93:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/ha-951000-m02/disk.qcow2
	I0307 09:52:46.372309    3129 main.go:141] libmachine: STDOUT: 
	I0307 09:52:46.372331    3129 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 09:52:46.372361    3129 fix.go:56] duration metric: took 24.49025ms for fixHost
	I0307 09:52:46.372365    3129 start.go:83] releasing machines lock for "ha-951000-m02", held for 24.505ms
	W0307 09:52:46.372376    3129 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 09:52:46.372408    3129 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 09:52:46.372412    3129 start.go:728] Will try again in 5 seconds ...
	I0307 09:52:51.374422    3129 start.go:360] acquireMachinesLock for ha-951000-m02: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 09:52:51.374667    3129 start.go:364] duration metric: took 178.5µs to acquireMachinesLock for "ha-951000-m02"
	I0307 09:52:51.374740    3129 start.go:96] Skipping create...Using existing machine configuration
	I0307 09:52:51.374749    3129 fix.go:54] fixHost starting: m02
	I0307 09:52:51.375170    3129 fix.go:112] recreateIfNeeded on ha-951000-m02: state=Stopped err=<nil>
	W0307 09:52:51.375182    3129 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 09:52:51.379693    3129 out.go:177] * Restarting existing qemu2 VM for "ha-951000-m02" ...
	I0307 09:52:51.383788    3129 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/ha-951000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/ha-951000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/ha-951000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:8d:f2:25:93:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/ha-951000-m02/disk.qcow2
	I0307 09:52:51.388316    3129 main.go:141] libmachine: STDOUT: 
	I0307 09:52:51.388360    3129 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 09:52:51.388419    3129 fix.go:56] duration metric: took 13.670834ms for fixHost
	I0307 09:52:51.388427    3129 start.go:83] releasing machines lock for "ha-951000-m02", held for 13.748542ms
	W0307 09:52:51.388520    3129 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-951000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-951000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 09:52:51.392759    3129 out.go:177] 
	W0307 09:52:51.396743    3129 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 09:52:51.396756    3129 out.go:239] * 
	* 
	W0307 09:52:51.406769    3129 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 09:52:51.411777    3129 out.go:177] 
** /stderr **
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-951000 node start m02 -v=7 --alsologtostderr": exit status 80
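Every restart attempt above dies at the same first step: socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket. A minimal diagnostic sketch in Go that reproduces just that dial, using the SocketVMnetPath from the profile config logged above (this is an illustrative probe, not minikube code):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// SocketVMnetPath from the cluster config logged above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// With the daemon down, this produces the same "Connection refused"
		// that socket_vmnet_client reports in the stderr log.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the dial fails here, restarting the socket_vmnet daemon on the host (however it is supervised on this machine) is the likely fix; the qemu2 driver cannot bring any node up without it.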
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 status -v=7 --alsologtostderr
E0307 09:54:13.836932    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/client.crt: no such file or directory
E0307 09:55:16.508160    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/functional-618000/client.crt: no such file or directory
E0307 09:55:36.902167    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-951000 status -v=7 --alsologtostderr: exit status 7 (2m57.774522083s)

-- stdout --
	ha-951000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-951000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-951000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-951000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent

-- /stdout --
** stderr ** 
	I0307 09:52:51.463036    3133 out.go:291] Setting OutFile to fd 1 ...
	I0307 09:52:51.463320    3133 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 09:52:51.463324    3133 out.go:304] Setting ErrFile to fd 2...
	I0307 09:52:51.463327    3133 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 09:52:51.463461    3133 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 09:52:51.463733    3133 out.go:298] Setting JSON to false
	I0307 09:52:51.463748    3133 mustload.go:65] Loading cluster: ha-951000
	I0307 09:52:51.463774    3133 notify.go:220] Checking for updates...
	I0307 09:52:51.464029    3133 config.go:182] Loaded profile config "ha-951000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 09:52:51.464036    3133 status.go:255] checking status of ha-951000 ...
	I0307 09:52:51.465008    3133 status.go:330] ha-951000 host status = "Running" (err=<nil>)
	I0307 09:52:51.465017    3133 host.go:66] Checking if "ha-951000" exists ...
	I0307 09:52:51.465117    3133 host.go:66] Checking if "ha-951000" exists ...
	I0307 09:52:51.465247    3133 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 09:52:51.465256    3133 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/ha-951000/id_rsa Username:docker}
	W0307 09:52:51.465527    3133 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0307 09:52:51.465543    3133 retry.go:31] will retry after 310.223646ms: dial tcp 192.168.105.5:22: connect: host is down
	W0307 09:52:51.783004    3133 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0307 09:52:51.783043    3133 retry.go:31] will retry after 212.801639ms: dial tcp 192.168.105.5:22: connect: host is down
	W0307 09:52:51.997582    3133 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0307 09:52:51.997642    3133 retry.go:31] will retry after 555.91971ms: dial tcp 192.168.105.5:22: connect: host is down
	W0307 09:52:52.556154    3133 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0307 09:52:52.556391    3133 retry.go:31] will retry after 153.693166ms: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	I0307 09:52:52.712268    3133 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/ha-951000/id_rsa Username:docker}
	W0307 09:52:52.713222    3133 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0307 09:52:52.713267    3133 retry.go:31] will retry after 280.960337ms: dial tcp 192.168.105.5:22: connect: host is down
	W0307 09:52:52.995035    3133 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0307 09:52:52.995103    3133 retry.go:31] will retry after 254.12754ms: dial tcp 192.168.105.5:22: connect: host is down
	W0307 09:53:19.172960    3133 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0307 09:53:19.173022    3133 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0307 09:53:19.173031    3133 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0307 09:53:19.173039    3133 status.go:257] ha-951000 status: &{Name:ha-951000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0307 09:53:19.173049    3133 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0307 09:53:19.173052    3133 status.go:255] checking status of ha-951000-m02 ...
	I0307 09:53:19.173267    3133 status.go:330] ha-951000-m02 host status = "Stopped" (err=<nil>)
	I0307 09:53:19.173272    3133 status.go:343] host is not running, skipping remaining checks
	I0307 09:53:19.173274    3133 status.go:257] ha-951000-m02 status: &{Name:ha-951000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 09:53:19.173279    3133 status.go:255] checking status of ha-951000-m03 ...
	I0307 09:53:19.174842    3133 status.go:330] ha-951000-m03 host status = "Running" (err=<nil>)
	I0307 09:53:19.174849    3133 host.go:66] Checking if "ha-951000-m03" exists ...
	I0307 09:53:19.174965    3133 host.go:66] Checking if "ha-951000-m03" exists ...
	I0307 09:53:19.175083    3133 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 09:53:19.175090    3133 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/ha-951000-m03/id_rsa Username:docker}
	W0307 09:54:34.175879    3133 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0307 09:54:34.176038    3133 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0307 09:54:34.176066    3133 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0307 09:54:34.176080    3133 status.go:257] ha-951000-m03 status: &{Name:ha-951000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0307 09:54:34.176120    3133 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0307 09:54:34.176137    3133 status.go:255] checking status of ha-951000-m04 ...
	I0307 09:54:34.179933    3133 status.go:330] ha-951000-m04 host status = "Running" (err=<nil>)
	I0307 09:54:34.179956    3133 host.go:66] Checking if "ha-951000-m04" exists ...
	I0307 09:54:34.180304    3133 host.go:66] Checking if "ha-951000-m04" exists ...
	I0307 09:54:34.180725    3133 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 09:54:34.180747    3133 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/ha-951000-m04/id_rsa Username:docker}
	W0307 09:55:49.180623    3133 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0307 09:55:49.180832    3133 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0307 09:55:49.180871    3133 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0307 09:55:49.180891    3133 status.go:257] ha-951000-m04 status: &{Name:ha-951000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0307 09:55:49.180934    3133 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-951000 status -v=7 --alsologtostderr" : exit status 7
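The status probe above spends its nearly three minutes in sshutil.go's dial-and-retry loop against port 22 of each node. A compact Go sketch of that pattern, with assumed sub-second, growing waits to mirror the retry intervals in the log rather than minikube's exact backoff:

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry mimics the sshutil.go behavior visible above: dial tcp :22,
// log the failure, wait briefly, and try again until attempts run out.
func dialWithRetry(addr string, attempts int) (net.Conn, error) {
	var err error
	for i := 0; i < attempts; i++ {
		var c net.Conn
		if c, err = net.DialTimeout("tcp", addr, 5*time.Second); err == nil {
			return c, nil
		}
		wait := time.Duration(200+100*i) * time.Millisecond // assumed backoff
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return nil, err
}

func main() {
	if c, err := dialWithRetry("192.168.105.5:22", 5); err == nil {
		c.Close()
	} else {
		fmt.Println("status error:", err) // ends up as Host:Error in the report
	}
}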
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-951000 -n ha-951000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-951000 -n ha-951000: exit status 3 (26.000512917s)

-- stdout --
	Error
-- /stdout --
** stderr ** 
	E0307 09:56:15.180782    3162 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0307 09:56:15.180828    3162 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-951000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMutliControlPlane/serial/RestartSecondaryNode (208.90s)

TestMutliControlPlane/serial/RestartClusterKeepsNodes (234.39s)

=== RUN   TestMutliControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-951000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-951000 -v=7 --alsologtostderr
E0307 09:59:13.827802    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/client.crt: no such file or directory
E0307 10:00:16.499677    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/functional-618000/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-951000 -v=7 --alsologtostderr: (3m49.022634792s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-951000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-951000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.222716708s)

-- stdout --
	* [ha-951000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-951000" primary control-plane node in "ha-951000" cluster
	* Restarting existing qemu2 VM for "ha-951000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-951000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0307 10:01:24.285369    3268 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:01:24.285547    3268 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:01:24.285551    3268 out.go:304] Setting ErrFile to fd 2...
	I0307 10:01:24.285554    3268 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:01:24.285712    3268 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:01:24.286866    3268 out.go:298] Setting JSON to false
	I0307 10:01:24.307362    3268 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3656,"bootTime":1709830828,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 10:01:24.307422    3268 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:01:24.312802    3268 out.go:177] * [ha-951000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 10:01:24.319724    3268 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 10:01:24.323875    3268 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:01:24.319769    3268 notify.go:220] Checking for updates...
	I0307 10:01:24.327689    3268 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 10:01:24.330788    3268 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:01:24.333766    3268 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 10:01:24.336797    3268 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:01:24.340261    3268 config.go:182] Loaded profile config "ha-951000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:01:24.340313    3268 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:01:24.345755    3268 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 10:01:24.352756    3268 start.go:297] selected driver: qemu2
	I0307 10:01:24.352762    3268 start.go:901] validating driver "qemu2" against &{Name:ha-951000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.28.4 ClusterName:ha-951000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:
false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:01:24.352842    3268 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:01:24.355769    3268 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 10:01:24.355821    3268 cni.go:84] Creating CNI manager for ""
	I0307 10:01:24.355827    3268 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0307 10:01:24.355875    3268 start.go:340] cluster config:
	{Name:ha-951000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-951000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:01:24.361705    3268 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:01:24.369753    3268 out.go:177] * Starting "ha-951000" primary control-plane node in "ha-951000" cluster
	I0307 10:01:24.373740    3268 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 10:01:24.373753    3268 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 10:01:24.373763    3268 cache.go:56] Caching tarball of preloaded images
	I0307 10:01:24.373810    3268 preload.go:173] Found /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 10:01:24.373816    3268 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 10:01:24.373888    3268 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/ha-951000/config.json ...
	I0307 10:01:24.374370    3268 start.go:360] acquireMachinesLock for ha-951000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:01:24.374404    3268 start.go:364] duration metric: took 27.708µs to acquireMachinesLock for "ha-951000"
	I0307 10:01:24.374413    3268 start.go:96] Skipping create...Using existing machine configuration
	I0307 10:01:24.374418    3268 fix.go:54] fixHost starting: 
	I0307 10:01:24.374535    3268 fix.go:112] recreateIfNeeded on ha-951000: state=Stopped err=<nil>
	W0307 10:01:24.374543    3268 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 10:01:24.377768    3268 out.go:177] * Restarting existing qemu2 VM for "ha-951000" ...
	I0307 10:01:24.385682    3268 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/ha-951000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/ha-951000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/ha-951000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:9d:0e:91:a4:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/ha-951000/disk.qcow2
	I0307 10:01:24.388088    3268 main.go:141] libmachine: STDOUT: 
	I0307 10:01:24.388109    3268 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:01:24.388139    3268 fix.go:56] duration metric: took 13.720375ms for fixHost
	I0307 10:01:24.388145    3268 start.go:83] releasing machines lock for "ha-951000", held for 13.736458ms
	W0307 10:01:24.388150    3268 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:01:24.388191    3268 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:01:24.388195    3268 start.go:728] Will try again in 5 seconds ...
	I0307 10:01:29.390197    3268 start.go:360] acquireMachinesLock for ha-951000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:01:29.390535    3268 start.go:364] duration metric: took 262.958µs to acquireMachinesLock for "ha-951000"
	I0307 10:01:29.390678    3268 start.go:96] Skipping create...Using existing machine configuration
	I0307 10:01:29.390699    3268 fix.go:54] fixHost starting: 
	I0307 10:01:29.391338    3268 fix.go:112] recreateIfNeeded on ha-951000: state=Stopped err=<nil>
	W0307 10:01:29.391362    3268 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 10:01:29.395763    3268 out.go:177] * Restarting existing qemu2 VM for "ha-951000" ...
	I0307 10:01:29.397846    3268 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/ha-951000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/ha-951000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/ha-951000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:9d:0e:91:a4:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/ha-951000/disk.qcow2
	I0307 10:01:29.407769    3268 main.go:141] libmachine: STDOUT: 
	I0307 10:01:29.407842    3268 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:01:29.407928    3268 fix.go:56] duration metric: took 17.228834ms for fixHost
	I0307 10:01:29.407970    3268 start.go:83] releasing machines lock for "ha-951000", held for 17.412416ms
	W0307 10:01:29.408187    3268 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-951000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-951000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:01:29.414755    3268 out.go:177] 
	W0307 10:01:29.417695    3268 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:01:29.417713    3268 out.go:239] * 
	* 
	W0307 10:01:29.419464    3268 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:01:29.427722    3268 out.go:177] 
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-951000 -v=7 --alsologtostderr" : exit status 80
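As the libmachine lines in the stderr show, the qemu2 driver never invokes qemu-system-aarch64 directly; the whole command line is handed to socket_vmnet_client, which is expected to connect to the daemon and pass the VM a file descriptor for networking. A stripped-down Go sketch of that launch shape (paths taken from the log; argument plumbing and error handling simplified, not minikube's actual driver code):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// socket_vmnet_client <socket path> <command...>, as in the log above;
	// any extra qemu flags are forwarded from this program's own argv.
	args := append([]string{"/var/run/socket_vmnet", "qemu-system-aarch64"}, os.Args[1:]...)
	cmd := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		// This is where "Failed to connect ... Connection refused: exit
		// status 1" surfaces when the daemon is not running.
		fmt.Fprintln(os.Stderr, "driver start:", err)
		os.Exit(1)
	}
}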
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-951000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-951000 -n ha-951000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-951000 -n ha-951000: exit status 7 (34.731166ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-951000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/RestartClusterKeepsNodes (234.39s)

TestMutliControlPlane/serial/DeleteSecondaryNode (0.11s)

=== RUN   TestMutliControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-951000 node delete m03 -v=7 --alsologtostderr: exit status 83 (41.839916ms)

-- stdout --
	* The control-plane node ha-951000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-951000"

-- /stdout --
** stderr ** 
	I0307 10:01:29.575485    3280 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:01:29.575721    3280 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:01:29.575724    3280 out.go:304] Setting ErrFile to fd 2...
	I0307 10:01:29.575727    3280 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:01:29.575864    3280 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:01:29.576100    3280 mustload.go:65] Loading cluster: ha-951000
	I0307 10:01:29.576318    3280 config.go:182] Loaded profile config "ha-951000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	W0307 10:01:29.576625    3280 out.go:239] ! The control-plane node ha-951000 host is not running (will try others): state=Stopped
	! The control-plane node ha-951000 host is not running (will try others): state=Stopped
	W0307 10:01:29.576756    3280 out.go:239] ! The control-plane node ha-951000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-951000-m02 host is not running (will try others): state=Stopped
	I0307 10:01:29.581321    3280 out.go:177] * The control-plane node ha-951000-m03 host is not running: state=Stopped
	I0307 10:01:29.584178    3280 out.go:177]   To start a cluster, run: "minikube start -p ha-951000"
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-951000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-951000 status -v=7 --alsologtostderr: exit status 7 (31.195584ms)

-- stdout --
	ha-951000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-951000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-951000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-951000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0307 10:01:29.617368    3282 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:01:29.617528    3282 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:01:29.617531    3282 out.go:304] Setting ErrFile to fd 2...
	I0307 10:01:29.617534    3282 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:01:29.617664    3282 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:01:29.617783    3282 out.go:298] Setting JSON to false
	I0307 10:01:29.617795    3282 mustload.go:65] Loading cluster: ha-951000
	I0307 10:01:29.617856    3282 notify.go:220] Checking for updates...
	I0307 10:01:29.618052    3282 config.go:182] Loaded profile config "ha-951000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:01:29.618058    3282 status.go:255] checking status of ha-951000 ...
	I0307 10:01:29.618257    3282 status.go:330] ha-951000 host status = "Stopped" (err=<nil>)
	I0307 10:01:29.618260    3282 status.go:343] host is not running, skipping remaining checks
	I0307 10:01:29.618262    3282 status.go:257] ha-951000 status: &{Name:ha-951000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 10:01:29.618273    3282 status.go:255] checking status of ha-951000-m02 ...
	I0307 10:01:29.618362    3282 status.go:330] ha-951000-m02 host status = "Stopped" (err=<nil>)
	I0307 10:01:29.618365    3282 status.go:343] host is not running, skipping remaining checks
	I0307 10:01:29.618367    3282 status.go:257] ha-951000-m02 status: &{Name:ha-951000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 10:01:29.618371    3282 status.go:255] checking status of ha-951000-m03 ...
	I0307 10:01:29.618457    3282 status.go:330] ha-951000-m03 host status = "Stopped" (err=<nil>)
	I0307 10:01:29.618459    3282 status.go:343] host is not running, skipping remaining checks
	I0307 10:01:29.618461    3282 status.go:257] ha-951000-m03 status: &{Name:ha-951000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 10:01:29.618465    3282 status.go:255] checking status of ha-951000-m04 ...
	I0307 10:01:29.618560    3282 status.go:330] ha-951000-m04 host status = "Stopped" (err=<nil>)
	I0307 10:01:29.618563    3282 status.go:343] host is not running, skipping remaining checks
	I0307 10:01:29.618565    3282 status.go:257] ha-951000-m04 status: &{Name:ha-951000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-951000 status -v=7 --alsologtostderr" : exit status 7
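The "status: &{Name:... PodManEnv:}" lines in the stderr block above are Go's %+v rendering of a pointer to minikube's per-node status value. A minimal sketch that reproduces the format, assuming only the field names visible in the dump (minikube's real status type may carry more fields):

package main

import "fmt"

// Field names and order are read off the "&{Name:... PodManEnv:}" dumps in
// the stderr block above; this struct is a reconstruction, not minikube's own.
type nodeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
	TimeToStop string
	DockerEnv  string
	PodManEnv  string
}

func main() {
	s := nodeStatus{Name: "ha-951000", Host: "Stopped", Kubelet: "Stopped",
		APIServer: "Stopped", Kubeconfig: "Stopped", Worker: false}
	// %+v on a pointer reproduces the log line's "&{Name:... PodManEnv:}" form.
	fmt.Printf("ha-951000 status: %+v\n", &s)
}

Worker:false on the three control-plane entries and Worker:true on m04 is how this dump distinguishes node roles.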
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-951000 -n ha-951000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-951000 -n ha-951000: exit status 7 (31.6155ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-951000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/DeleteSecondaryNode (0.11s)

TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.1s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-951000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-951000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-951000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-951000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kub
evirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountI
P\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-951000 -n ha-951000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-951000 -n ha-951000: exit status 7 (31.362416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-951000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.10s)
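The assertion at ha_test.go:413 compares the profile's Status field from "minikube profile list --output json" against "Degraded". A minimal sketch of that check, assuming the test simply decodes the JSON quoted in the failure message above (the struct below is read off that JSON, not taken from minikube's own types); with every VM stopped the profile can only ever report "Stopped", so the check fails for the same underlying reason as the node-delete step before it:

package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical shape, copied from the `profile list --output json` payload
// embedded in the failure message above.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-951000","Status":"Stopped"}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		if p.Name == "ha-951000" && p.Status != "Degraded" {
			fmt.Printf("expected %q, got %q\n", "Degraded", p.Status)
		}
	}
}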

TestMutliControlPlane/serial/StopCluster (202.07s)

=== RUN   TestMutliControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 stop -v=7 --alsologtostderr
E0307 10:01:39.564862    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/functional-618000/client.crt: no such file or directory
E0307 10:04:13.817978    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-951000 stop -v=7 --alsologtostderr: (3m21.973185375s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-951000 status -v=7 --alsologtostderr: exit status 7 (66.463958ms)

-- stdout --
	ha-951000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-951000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-951000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-951000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0307 10:04:51.783672    3338 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:04:51.783844    3338 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:04:51.783849    3338 out.go:304] Setting ErrFile to fd 2...
	I0307 10:04:51.783851    3338 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:04:51.784008    3338 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:04:51.784173    3338 out.go:298] Setting JSON to false
	I0307 10:04:51.784188    3338 mustload.go:65] Loading cluster: ha-951000
	I0307 10:04:51.784227    3338 notify.go:220] Checking for updates...
	I0307 10:04:51.784496    3338 config.go:182] Loaded profile config "ha-951000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:04:51.784509    3338 status.go:255] checking status of ha-951000 ...
	I0307 10:04:51.784763    3338 status.go:330] ha-951000 host status = "Stopped" (err=<nil>)
	I0307 10:04:51.784768    3338 status.go:343] host is not running, skipping remaining checks
	I0307 10:04:51.784771    3338 status.go:257] ha-951000 status: &{Name:ha-951000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 10:04:51.784782    3338 status.go:255] checking status of ha-951000-m02 ...
	I0307 10:04:51.784902    3338 status.go:330] ha-951000-m02 host status = "Stopped" (err=<nil>)
	I0307 10:04:51.784906    3338 status.go:343] host is not running, skipping remaining checks
	I0307 10:04:51.784908    3338 status.go:257] ha-951000-m02 status: &{Name:ha-951000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 10:04:51.784913    3338 status.go:255] checking status of ha-951000-m03 ...
	I0307 10:04:51.785029    3338 status.go:330] ha-951000-m03 host status = "Stopped" (err=<nil>)
	I0307 10:04:51.785032    3338 status.go:343] host is not running, skipping remaining checks
	I0307 10:04:51.785034    3338 status.go:257] ha-951000-m03 status: &{Name:ha-951000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 10:04:51.785038    3338 status.go:255] checking status of ha-951000-m04 ...
	I0307 10:04:51.785151    3338 status.go:330] ha-951000-m04 host status = "Stopped" (err=<nil>)
	I0307 10:04:51.785155    3338 status.go:343] host is not running, skipping remaining checks
	I0307 10:04:51.785157    3338 status.go:257] ha-951000-m04 status: &{Name:ha-951000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-951000 status -v=7 --alsologtostderr": ha-951000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-951000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-951000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-951000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-951000 status -v=7 --alsologtostderr": ha-951000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-951000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-951000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-951000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-951000 status -v=7 --alsologtostderr": ha-951000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-951000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-951000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-951000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-951000 -n ha-951000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-951000 -n ha-951000: exit status 7 (33.0075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-951000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/StopCluster (202.07s)
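The three assertions at ha_test.go:543, :549 and :552 read like substring counts over the status output: after a successful m03 delete, the stopped cluster would show 2 control planes, 3 stopped kubelets and 2 stopped apiservers, but because DeleteSecondaryNode failed, m03 is still listed and every count is one too high. A minimal sketch of that arithmetic, assuming the test counts substrings (the real assertions may differ in detail):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// The status output captured above, abbreviated to the lines the
	// (assumed) checks would count.
	status := `ha-951000
type: Control Plane
kubelet: Stopped
apiserver: Stopped

ha-951000-m02
type: Control Plane
kubelet: Stopped
apiserver: Stopped

ha-951000-m03
type: Control Plane
kubelet: Stopped
apiserver: Stopped

ha-951000-m04
type: Worker
kubelet: Stopped
`
	fmt.Println(strings.Count(status, "type: Control Plane")) // 3, post-delete cluster would have 2
	fmt.Println(strings.Count(status, "kubelet: Stopped"))    // 4, post-delete cluster would have 3
	fmt.Println(strings.Count(status, "apiserver: Stopped"))  // 3, post-delete cluster would have 2
}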

TestMutliControlPlane/serial/RestartCluster (5.25s)

=== RUN   TestMutliControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-951000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-951000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.180908166s)

-- stdout --
	* [ha-951000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-951000" primary control-plane node in "ha-951000" cluster
	* Restarting existing qemu2 VM for "ha-951000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-951000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 10:04:51.848616    3342 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:04:51.848752    3342 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:04:51.848756    3342 out.go:304] Setting ErrFile to fd 2...
	I0307 10:04:51.848758    3342 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:04:51.848911    3342 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:04:51.850014    3342 out.go:298] Setting JSON to false
	I0307 10:04:51.865998    3342 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3863,"bootTime":1709830828,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 10:04:51.866054    3342 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:04:51.871190    3342 out.go:177] * [ha-951000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 10:04:51.878160    3342 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 10:04:51.878219    3342 notify.go:220] Checking for updates...
	I0307 10:04:51.885162    3342 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:04:51.886480    3342 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 10:04:51.889114    3342 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:04:51.892190    3342 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 10:04:51.895200    3342 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:04:51.898558    3342 config.go:182] Loaded profile config "ha-951000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:04:51.898818    3342 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:04:51.903171    3342 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 10:04:51.914181    3342 start.go:297] selected driver: qemu2
	I0307 10:04:51.914189    3342 start.go:901] validating driver "qemu2" against &{Name:ha-951000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.28.4 ClusterName:ha-951000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVers
ion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:04:51.914264    3342 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:04:51.916624    3342 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 10:04:51.916672    3342 cni.go:84] Creating CNI manager for ""
	I0307 10:04:51.916678    3342 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0307 10:04:51.916736    3342 start.go:340] cluster config:
	{Name:ha-951000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-951000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:04:51.921430    3342 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:04:51.930149    3342 out.go:177] * Starting "ha-951000" primary control-plane node in "ha-951000" cluster
	I0307 10:04:51.934132    3342 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 10:04:51.934148    3342 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 10:04:51.934162    3342 cache.go:56] Caching tarball of preloaded images
	I0307 10:04:51.934222    3342 preload.go:173] Found /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 10:04:51.934229    3342 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 10:04:51.934319    3342 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/ha-951000/config.json ...
	I0307 10:04:51.934807    3342 start.go:360] acquireMachinesLock for ha-951000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:04:51.934841    3342 start.go:364] duration metric: took 28µs to acquireMachinesLock for "ha-951000"
	I0307 10:04:51.934850    3342 start.go:96] Skipping create...Using existing machine configuration
	I0307 10:04:51.934857    3342 fix.go:54] fixHost starting: 
	I0307 10:04:51.934983    3342 fix.go:112] recreateIfNeeded on ha-951000: state=Stopped err=<nil>
	W0307 10:04:51.934994    3342 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 10:04:51.939076    3342 out.go:177] * Restarting existing qemu2 VM for "ha-951000" ...
	I0307 10:04:51.947166    3342 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/ha-951000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/ha-951000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/ha-951000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:9d:0e:91:a4:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/ha-951000/disk.qcow2
	I0307 10:04:51.949371    3342 main.go:141] libmachine: STDOUT: 
	I0307 10:04:51.949396    3342 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:04:51.949428    3342 fix.go:56] duration metric: took 14.571375ms for fixHost
	I0307 10:04:51.949435    3342 start.go:83] releasing machines lock for "ha-951000", held for 14.589125ms
	W0307 10:04:51.949441    3342 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:04:51.949483    3342 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:04:51.949490    3342 start.go:728] Will try again in 5 seconds ...
	I0307 10:04:56.951489    3342 start.go:360] acquireMachinesLock for ha-951000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:04:56.951810    3342 start.go:364] duration metric: took 227.541µs to acquireMachinesLock for "ha-951000"
	I0307 10:04:56.951942    3342 start.go:96] Skipping create...Using existing machine configuration
	I0307 10:04:56.951962    3342 fix.go:54] fixHost starting: 
	I0307 10:04:56.952507    3342 fix.go:112] recreateIfNeeded on ha-951000: state=Stopped err=<nil>
	W0307 10:04:56.952534    3342 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 10:04:56.956124    3342 out.go:177] * Restarting existing qemu2 VM for "ha-951000" ...
	I0307 10:04:56.960162    3342 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/ha-951000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/ha-951000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/ha-951000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:9d:0e:91:a4:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/ha-951000/disk.qcow2
	I0307 10:04:56.968152    3342 main.go:141] libmachine: STDOUT: 
	I0307 10:04:56.968221    3342 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:04:56.968312    3342 fix.go:56] duration metric: took 16.354667ms for fixHost
	I0307 10:04:56.968337    3342 start.go:83] releasing machines lock for "ha-951000", held for 16.502833ms
	W0307 10:04:56.968523    3342 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-951000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-951000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:04:56.975063    3342 out.go:177] 
	W0307 10:04:56.978191    3342 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:04:56.978217    3342 out.go:239] * 
	* 
	W0307 10:04:56.980548    3342 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:04:56.988034    3342 out.go:177] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-951000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-951000 -n ha-951000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-951000 -n ha-951000: exit status 7 (69.989292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-951000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/RestartCluster (5.25s)
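Every retry in this run fails at the same point: the qemu command line in the stderr block above is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which needs a socket_vmnet daemon listening on /var/run/socket_vmnet, and nothing on the CI host is accepting connections there. A hedged diagnostic sketch (not part of the test suite) that reproduces the check by dialing the same unix socket:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the socket path that socket_vmnet_client is pointed at in the
	// qemu invocation logged above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this dial reports "connection refused", getting the socket_vmnet daemon running on the host (however it is managed there) is the precondition for any of these qemu2 starts to get past GUEST_PROVISION.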

TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.11s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-951000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-951000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-951000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-951000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kub
evirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountI
P\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-951000 -n ha-951000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-951000 -n ha-951000: exit status 7 (30.987334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-951000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.11s)

TestMutliControlPlane/serial/AddSecondaryNode (0.08s)

=== RUN   TestMutliControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-951000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-951000 --control-plane -v=7 --alsologtostderr: exit status 83 (46.561917ms)

-- stdout --
	* The control-plane node ha-951000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-951000"

-- /stdout --
** stderr ** 
	I0307 10:04:57.209252    3360 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:04:57.209605    3360 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:04:57.209608    3360 out.go:304] Setting ErrFile to fd 2...
	I0307 10:04:57.209610    3360 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:04:57.209761    3360 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:04:57.210020    3360 mustload.go:65] Loading cluster: ha-951000
	I0307 10:04:57.210213    3360 config.go:182] Loaded profile config "ha-951000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	W0307 10:04:57.210507    3360 out.go:239] ! The control-plane node ha-951000 host is not running (will try others): state=Stopped
	! The control-plane node ha-951000 host is not running (will try others): state=Stopped
	W0307 10:04:57.210617    3360 out.go:239] ! The control-plane node ha-951000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-951000-m02 host is not running (will try others): state=Stopped
	I0307 10:04:57.214464    3360 out.go:177] * The control-plane node ha-951000-m03 host is not running: state=Stopped
	I0307 10:04:57.221399    3360 out.go:177]   To start a cluster, run: "minikube start -p ha-951000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-951000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-951000 -n ha-951000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-951000 -n ha-951000: exit status 7 (31.454167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-951000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/AddSecondaryNode (0.08s)

TestImageBuild/serial/Setup (10.09s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-885000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-885000 --driver=qemu2 : exit status 80 (10.0137065s)

-- stdout --
	* [image-885000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-885000" primary control-plane node in "image-885000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-885000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-885000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-885000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-885000 -n image-885000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-885000 -n image-885000: exit status 7 (71.378125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-885000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.09s)

TestJSONOutput/start/Command (9.79s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-564000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
E0307 10:05:16.489660    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/functional-618000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-564000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.788119958s)

-- stdout --
	{"specversion":"1.0","id":"e4c3206d-5356-4ffb-aa77-727d06a6fa5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-564000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"564496e2-d187-4991-ba6a-d9925212ebe6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18241"}}
	{"specversion":"1.0","id":"0023c2c3-a85f-46c7-ad6c-4c73502b5f38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig"}}
	{"specversion":"1.0","id":"a7bbfb86-21f2-4eab-af8c-8a2f2a5d9c54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"c53bdffc-02a3-4cd7-a77a-9563d68f16c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e97110ac-440d-480a-a48b-5b0fa72d48c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube"}}
	{"specversion":"1.0","id":"4e17d1f6-d5b9-42b3-9d2c-2867e819cef3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f0fd8ee8-59b0-4866-8440-71350d652b6c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"af33f062-0a14-4d92-b44d-28e7ecbb20c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"fa05a626-a7c7-45ef-ac67-bae458d060e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-564000\" primary control-plane node in \"json-output-564000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3974ff9f-f3c7-4875-9207-f4f4fa3af42b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"0d72db79-6d66-47bd-8a12-b29551d335c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-564000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"5044ae61-ac69-46ca-aa73-9fb48f6d18fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"b8e05bba-03ab-4157-9627-3e0b9254ece1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"0b1a850c-7b85-44d1-a6b5-e54f04090c5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-564000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"14d0f6a8-8d88-40ee-a797-b1650d14fed9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"49bb9bcf-e891-4ca7-bde1-d14bcef2a699","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-564000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.79s)
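Note on the decode failure at json_output_test.go:70: `--output=json` is expected to emit exactly one CloudEvent per stdout line, so the stray plain-text `OUTPUT:`/`ERROR:` lines injected by the qemu2 driver are what turn a provisioning failure into a parse failure as well. A minimal Go sketch of per-line CloudEvent decoding that reproduces the error (an illustration with a fabricated `raw` input, not the suite's actual code):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	// Two well-formed CloudEvent lines with a stray plain-text line between
	// them, mimicking the interleaved qemu2 driver output captured above.
	raw := "{\"specversion\":\"1.0\",\"type\":\"io.k8s.sigs.minikube.step\"}\n" +
		"OUTPUT: \n" +
		"{\"specversion\":\"1.0\",\"type\":\"io.k8s.sigs.minikube.error\"}\n"

	sc := bufio.NewScanner(strings.NewReader(raw))
	for sc.Scan() {
		var ev map[string]interface{}
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			// Prints: invalid character 'O' looking for beginning of value
			fmt.Println("converting to cloud events:", err)
			continue
		}
		fmt.Println("event type:", ev["type"])
	}
}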

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-564000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-564000 --output=json --user=testUser: exit status 83 (81.813791ms)

-- stdout --
	{"specversion":"1.0","id":"1068f3da-5980-4e07-9bc4-811e1184bff7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-564000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"3a60a225-df09-43f7-a183-f82ca3c644f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-564000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-564000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-564000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-564000 --output=json --user=testUser: exit status 83 (47.246958ms)

-- stdout --
	* The control-plane node json-output-564000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-564000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-564000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-564000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.33s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-230000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-230000 --driver=qemu2 : exit status 80 (9.881509084s)

-- stdout --
	* [first-230000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-230000" primary control-plane node in "first-230000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-230000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-230000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-230000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-03-07 10:05:30.074002 -0800 PST m=+2193.887469751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-231000 -n second-231000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-231000 -n second-231000: exit status 85 (80.926792ms)

-- stdout --
	* Profile "second-231000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-231000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-231000" host is not running, skipping log retrieval (state="* Profile \"second-231000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-231000\"")
helpers_test.go:175: Cleaning up "second-231000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-231000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-03-07 10:05:30.386601 -0800 PST m=+2194.200078584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-230000 -n first-230000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-230000 -n first-230000: exit status 7 (31.570417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-230000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-230000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-230000
--- FAIL: TestMinikubeProfile (10.33s)
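Note on the root cause: every `start` in this run dies the same way, before any Kubernetes work begins, because the qemu2 driver cannot reach the socket_vmnet daemon. A standalone probe of the socket (a sketch, not part of the suite; the path is taken from the errors above) separates "socket file missing" from "daemon not listening":

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Socket path taken from the failure messages in this report; adjust it
	// if your socket_vmnet install listens elsewhere.
	const sock = "/var/run/socket_vmnet"

	if _, err := os.Stat(sock); err != nil {
		fmt.Println("socket file problem:", err) // missing file or no permission
		return
	}
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" here means the file exists but no daemon is
		// listening behind it, which matches every failure in this run.
		fmt.Println("daemon not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}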

TestMountStart/serial/StartWithMountFirst (10.62s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-569000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-569000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.55022325s)

-- stdout --
	* [mount-start-1-569000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-569000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-569000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-569000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-569000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-569000 -n mount-start-1-569000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-569000 -n mount-start-1-569000: exit status 7 (69.175667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-569000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.62s)

TestMultiNode/serial/FreshStart2Nodes (9.83s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-606000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-606000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.764770833s)

-- stdout --
	* [multinode-606000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-606000" primary control-plane node in "multinode-606000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-606000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 10:05:41.507128    3524 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:05:41.507256    3524 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:05:41.507259    3524 out.go:304] Setting ErrFile to fd 2...
	I0307 10:05:41.507261    3524 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:05:41.507398    3524 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:05:41.508495    3524 out.go:298] Setting JSON to false
	I0307 10:05:41.524414    3524 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3913,"bootTime":1709830828,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 10:05:41.524477    3524 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:05:41.529656    3524 out.go:177] * [multinode-606000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 10:05:41.537630    3524 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 10:05:41.537682    3524 notify.go:220] Checking for updates...
	I0307 10:05:41.541671    3524 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:05:41.544746    3524 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 10:05:41.547632    3524 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:05:41.550680    3524 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 10:05:41.553586    3524 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:05:41.556767    3524 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:05:41.560653    3524 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 10:05:41.567655    3524 start.go:297] selected driver: qemu2
	I0307 10:05:41.567661    3524 start.go:901] validating driver "qemu2" against <nil>
	I0307 10:05:41.567666    3524 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:05:41.569913    3524 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 10:05:41.572681    3524 out.go:177] * Automatically selected the socket_vmnet network
	I0307 10:05:41.575725    3524 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 10:05:41.575769    3524 cni.go:84] Creating CNI manager for ""
	I0307 10:05:41.575775    3524 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0307 10:05:41.575779    3524 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0307 10:05:41.575819    3524 start.go:340] cluster config:
	{Name:multinode-606000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-606000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:05:41.580279    3524 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:05:41.587744    3524 out.go:177] * Starting "multinode-606000" primary control-plane node in "multinode-606000" cluster
	I0307 10:05:41.591587    3524 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 10:05:41.591611    3524 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 10:05:41.591623    3524 cache.go:56] Caching tarball of preloaded images
	I0307 10:05:41.591678    3524 preload.go:173] Found /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 10:05:41.591684    3524 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 10:05:41.591900    3524 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/multinode-606000/config.json ...
	I0307 10:05:41.591911    3524 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/multinode-606000/config.json: {Name:mka195f021321537b4a14b05aa00eaa638f81f1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:05:41.592124    3524 start.go:360] acquireMachinesLock for multinode-606000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:05:41.592156    3524 start.go:364] duration metric: took 25.417µs to acquireMachinesLock for "multinode-606000"
	I0307 10:05:41.592166    3524 start.go:93] Provisioning new machine with config: &{Name:multinode-606000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-606000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:05:41.592194    3524 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:05:41.600647    3524 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 10:05:41.617908    3524 start.go:159] libmachine.API.Create for "multinode-606000" (driver="qemu2")
	I0307 10:05:41.617930    3524 client.go:168] LocalClient.Create starting
	I0307 10:05:41.617976    3524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:05:41.618016    3524 main.go:141] libmachine: Decoding PEM data...
	I0307 10:05:41.618027    3524 main.go:141] libmachine: Parsing certificate...
	I0307 10:05:41.618073    3524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:05:41.618094    3524 main.go:141] libmachine: Decoding PEM data...
	I0307 10:05:41.618105    3524 main.go:141] libmachine: Parsing certificate...
	I0307 10:05:41.618450    3524 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:05:41.752830    3524 main.go:141] libmachine: Creating SSH key...
	I0307 10:05:41.803952    3524 main.go:141] libmachine: Creating Disk image...
	I0307 10:05:41.803964    3524 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:05:41.804132    3524 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/multinode-606000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/multinode-606000/disk.qcow2
	I0307 10:05:41.816430    3524 main.go:141] libmachine: STDOUT: 
	I0307 10:05:41.816452    3524 main.go:141] libmachine: STDERR: 
	I0307 10:05:41.816512    3524 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/multinode-606000/disk.qcow2 +20000M
	I0307 10:05:41.827118    3524 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:05:41.827134    3524 main.go:141] libmachine: STDERR: 
	I0307 10:05:41.827150    3524 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/multinode-606000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/multinode-606000/disk.qcow2
	I0307 10:05:41.827154    3524 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:05:41.827186    3524 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/multinode-606000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/multinode-606000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/multinode-606000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:3d:ad:3b:62:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/multinode-606000/disk.qcow2
	I0307 10:05:41.828734    3524 main.go:141] libmachine: STDOUT: 
	I0307 10:05:41.828750    3524 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:05:41.828774    3524 client.go:171] duration metric: took 210.846209ms to LocalClient.Create
	I0307 10:05:43.830945    3524 start.go:128] duration metric: took 2.238799375s to createHost
	I0307 10:05:43.831049    3524 start.go:83] releasing machines lock for "multinode-606000", held for 2.238958833s
	W0307 10:05:43.831097    3524 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:05:43.845821    3524 out.go:177] * Deleting "multinode-606000" in qemu2 ...
	W0307 10:05:43.871334    3524 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:05:43.871370    3524 start.go:728] Will try again in 5 seconds ...
	I0307 10:05:48.872830    3524 start.go:360] acquireMachinesLock for multinode-606000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:05:48.873303    3524 start.go:364] duration metric: took 348.959µs to acquireMachinesLock for "multinode-606000"
	I0307 10:05:48.873452    3524 start.go:93] Provisioning new machine with config: &{Name:multinode-606000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-606000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:05:48.873718    3524 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:05:48.885409    3524 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 10:05:48.935196    3524 start.go:159] libmachine.API.Create for "multinode-606000" (driver="qemu2")
	I0307 10:05:48.935247    3524 client.go:168] LocalClient.Create starting
	I0307 10:05:48.935360    3524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:05:48.935412    3524 main.go:141] libmachine: Decoding PEM data...
	I0307 10:05:48.935452    3524 main.go:141] libmachine: Parsing certificate...
	I0307 10:05:48.935516    3524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:05:48.935566    3524 main.go:141] libmachine: Decoding PEM data...
	I0307 10:05:48.935577    3524 main.go:141] libmachine: Parsing certificate...
	I0307 10:05:48.936098    3524 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:05:49.081060    3524 main.go:141] libmachine: Creating SSH key...
	I0307 10:05:49.171380    3524 main.go:141] libmachine: Creating Disk image...
	I0307 10:05:49.171385    3524 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:05:49.171545    3524 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/multinode-606000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/multinode-606000/disk.qcow2
	I0307 10:05:49.184199    3524 main.go:141] libmachine: STDOUT: 
	I0307 10:05:49.184219    3524 main.go:141] libmachine: STDERR: 
	I0307 10:05:49.184278    3524 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/multinode-606000/disk.qcow2 +20000M
	I0307 10:05:49.194944    3524 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:05:49.194965    3524 main.go:141] libmachine: STDERR: 
	I0307 10:05:49.194984    3524 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/multinode-606000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/multinode-606000/disk.qcow2
	I0307 10:05:49.194990    3524 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:05:49.195034    3524 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/multinode-606000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/multinode-606000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/multinode-606000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:16:6b:1f:7e:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/multinode-606000/disk.qcow2
	I0307 10:05:49.196823    3524 main.go:141] libmachine: STDOUT: 
	I0307 10:05:49.196843    3524 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:05:49.196855    3524 client.go:171] duration metric: took 261.611667ms to LocalClient.Create
	I0307 10:05:51.198973    3524 start.go:128] duration metric: took 2.325301625s to createHost
	I0307 10:05:51.199079    3524 start.go:83] releasing machines lock for "multinode-606000", held for 2.325823708s
	W0307 10:05:51.199473    3524 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-606000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-606000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:05:51.209055    3524 out.go:177] 
	W0307 10:05:51.215165    3524 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:05:51.215190    3524 out.go:239] * 
	* 
	W0307 10:05:51.217114    3524 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:05:51.227018    3524 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-606000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-606000 -n multinode-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-606000 -n multinode-606000: exit status 7 (67.079417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.83s)
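Note on the launch mechanism visible in the stderr trace above: libmachine runs QEMU through `/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 ... -netdev socket,id=net0,fd=3`, i.e. the wrapper dials the vmnet socket and hands the connected descriptor to QEMU as fd 3. When the dial is refused, QEMU never starts, which is why STDOUT stays empty and only the connection error appears. A Go sketch of that fd handoff (the general mechanism via `exec.Cmd.ExtraFiles`, not minikube's actual code):

package main

import (
	"log"
	"net"
	"os"
	"os/exec"
)

func main() {
	// Dial the vmnet daemon first; on this CI host this is the step that
	// fails with "connection refused", so QEMU is never launched at all.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		log.Fatalf("Failed to connect to %q: %v", "/var/run/socket_vmnet", err)
	}
	f, err := conn.(*net.UnixConn).File()
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	cmd := exec.Command("qemu-system-aarch64",
		"-netdev", "socket,id=net0,fd=3", // fd 3 = first ExtraFiles entry
		"-device", "virtio-net-pci,netdev=net0")
	cmd.ExtraFiles = []*os.File{f} // the child inherits the socket as fd 3
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}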

TestMultiNode/serial/DeployApp2Nodes (106.83s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-606000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-606000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (134.468583ms)

** stderr ** 
	error: cluster "multinode-606000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-606000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-606000 -- rollout status deployment/busybox: exit status 1 (59.11375ms)

** stderr ** 
	error: no server found for cluster "multinode-606000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-606000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-606000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.758625ms)

** stderr ** 
	error: no server found for cluster "multinode-606000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-606000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-606000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.163583ms)

** stderr ** 
	error: no server found for cluster "multinode-606000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-606000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-606000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.44075ms)

** stderr ** 
	error: no server found for cluster "multinode-606000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-606000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-606000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.813375ms)

** stderr ** 
	error: no server found for cluster "multinode-606000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-606000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-606000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.212167ms)

** stderr ** 
	error: no server found for cluster "multinode-606000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-606000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-606000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.634958ms)

** stderr ** 
	error: no server found for cluster "multinode-606000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-606000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-606000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.19075ms)

** stderr ** 
	error: no server found for cluster "multinode-606000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-606000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-606000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.221584ms)

** stderr ** 
	error: no server found for cluster "multinode-606000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-606000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-606000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.390417ms)

** stderr ** 
	error: no server found for cluster "multinode-606000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-606000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-606000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.264916ms)

** stderr ** 
	error: no server found for cluster "multinode-606000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-606000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-606000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.340291ms)

** stderr ** 
	error: no server found for cluster "multinode-606000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-606000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-606000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.414959ms)

** stderr ** 
	error: no server found for cluster "multinode-606000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-606000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-606000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.215667ms)

** stderr ** 
	error: no server found for cluster "multinode-606000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-606000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-606000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.70575ms)

** stderr ** 
	error: no server found for cluster "multinode-606000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-606000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-606000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.883709ms)

** stderr ** 
	error: no server found for cluster "multinode-606000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-606000 -n multinode-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-606000 -n multinode-606000: exit status 7 (31.043458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (106.83s)
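Note on why this test spends ~106 s against a cluster that never existed: the repeated multinode_test.go:505/508 lines are a poll loop that retries failures marked "may be temporary" until a deadline, after which the final assertions fail for good. A generic poll-until-deadline sketch of that pattern (an assumed shape, not the test's exact code):

package main

import (
	"errors"
	"fmt"
	"time"
)

// pollUntil retries fn every interval until it succeeds or the deadline
// passes, mirroring the retry-on-temporary-failure pattern in the log above.
func pollUntil(deadline, interval time.Duration, fn func() error) error {
	stop := time.Now().Add(deadline)
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(stop) {
			return fmt.Errorf("timed out; last error: %w", err)
		}
		time.Sleep(interval)
	}
}

func main() {
	// Against a cluster that never started, every attempt returns the same
	// error, so the loop only ends when the deadline expires.
	err := pollUntil(3*time.Second, time.Second, func() error {
		return errors.New(`no server found for cluster "multinode-606000"`)
	})
	fmt.Println(err)
}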

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-606000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-606000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.762083ms)

** stderr ** 
	error: no server found for cluster "multinode-606000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-606000 -n multinode-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-606000 -n multinode-606000: exit status 7 (31.157291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.08s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-606000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-606000 -v 3 --alsologtostderr: exit status 83 (45.41875ms)

-- stdout --
	* The control-plane node multinode-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-606000"

-- /stdout --
** stderr ** 
	I0307 10:07:38.253345    3625 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:07:38.253505    3625 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:07:38.253508    3625 out.go:304] Setting ErrFile to fd 2...
	I0307 10:07:38.253510    3625 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:07:38.253640    3625 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:07:38.253877    3625 mustload.go:65] Loading cluster: multinode-606000
	I0307 10:07:38.254047    3625 config.go:182] Loaded profile config "multinode-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:07:38.260073    3625 out.go:177] * The control-plane node multinode-606000 host is not running: state=Stopped
	I0307 10:07:38.264879    3625 out.go:177]   To start a cluster, run: "minikube start -p multinode-606000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-606000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-606000 -n multinode-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-606000 -n multinode-606000: exit status 7 (31.101625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.56s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-606000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-606000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (529.870583ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-606000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-606000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-606000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-606000 -n multinode-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-606000 -n multinode-606000: exit status 7 (32.432541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.56s)
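
The two failures logged at multinode_test.go:223 and :230 are cause and effect: with the kubeconfig context missing, kubectl exits non-zero and prints nothing on stdout, so the follow-up decode of that empty output fails with "unexpected end of JSON input". A minimal reproduction of the decode step, assuming nothing about the test's real types:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // An empty payload is what the failed kubectl call left behind.
        var labels []map[string]string
        err := json.Unmarshal([]byte(""), &labels)
        fmt.Println(err) // unexpected end of JSON input
    }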

TestMultiNode/serial/ProfileList (0.1s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-606000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-606000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-606000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"multinode-606000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-606000 -n multinode-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-606000 -n multinode-606000: exit status 7 (31.092417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.10s)
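
The assertion at multinode_test.go:166 counts the Nodes entries inside the profile's Config; because AddNode failed earlier, the profile still carries only its original control-plane node. A sketch of that count, assuming pared-down types that mirror just the relevant fields of the JSON captured above:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // profileList models only what the assertion needs; the real config
    // struct is far larger, as the captured JSON shows.
    type profileList struct {
        Valid []struct {
            Name   string `json:"Name"`
            Config struct {
                Nodes []struct {
                    ControlPlane bool `json:"ControlPlane"`
                    Worker       bool `json:"Worker"`
                } `json:"Nodes"`
            } `json:"Config"`
        } `json:"valid"`
    }

    func main() {
        // Trimmed from the output above: a single control-plane node.
        out := []byte(`{"invalid":[],"valid":[{"Name":"multinode-606000","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
        var pl profileList
        if err := json.Unmarshal(out, &pl); err != nil {
            panic(err)
        }
        for _, p := range pl.Valid {
            fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes)) // 1, not 3
        }
    }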

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-606000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-606000 status --output json --alsologtostderr: exit status 7 (31.555792ms)

-- stdout --
	{"Name":"multinode-606000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0307 10:07:38.997351    3638 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:07:38.997491    3638 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:07:38.997495    3638 out.go:304] Setting ErrFile to fd 2...
	I0307 10:07:38.997497    3638 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:07:38.997619    3638 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:07:38.997730    3638 out.go:298] Setting JSON to true
	I0307 10:07:38.997745    3638 mustload.go:65] Loading cluster: multinode-606000
	I0307 10:07:38.997802    3638 notify.go:220] Checking for updates...
	I0307 10:07:38.997991    3638 config.go:182] Loaded profile config "multinode-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:07:38.997997    3638 status.go:255] checking status of multinode-606000 ...
	I0307 10:07:38.998188    3638 status.go:330] multinode-606000 host status = "Stopped" (err=<nil>)
	I0307 10:07:38.998192    3638 status.go:343] host is not running, skipping remaining checks
	I0307 10:07:38.998194    3638 status.go:257] multinode-606000 status: &{Name:multinode-606000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-606000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-606000 -n multinode-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-606000 -n multinode-606000: exit status 7 (30.811208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
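
The unmarshal error at multinode_test.go:191 is a shape mismatch: with only one node in the profile, status --output json appears to emit a single JSON object, while the test decodes into a slice of cmd.Status. A minimal reproduction with a stand-in type (the sketch reports []main.Status where the log says []cmd.Status):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Status mirrors the fields visible in the stdout above.
    type Status struct {
        Name, Host, Kubelet, APIServer, Kubeconfig string
        Worker                                     bool
    }

    func main() {
        out := []byte(`{"Name":"multinode-606000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
        var statuses []Status
        err := json.Unmarshal(out, &statuses)
        fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status
    }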

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-606000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-606000 node stop m03: exit status 85 (48.445834ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-606000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-606000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-606000 status: exit status 7 (31.29625ms)

-- stdout --
	multinode-606000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-606000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-606000 status --alsologtostderr: exit status 7 (31.482667ms)

-- stdout --
	multinode-606000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0307 10:07:39.140224    3646 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:07:39.140353    3646 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:07:39.140356    3646 out.go:304] Setting ErrFile to fd 2...
	I0307 10:07:39.140358    3646 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:07:39.140498    3646 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:07:39.140623    3646 out.go:298] Setting JSON to false
	I0307 10:07:39.140635    3646 mustload.go:65] Loading cluster: multinode-606000
	I0307 10:07:39.140699    3646 notify.go:220] Checking for updates...
	I0307 10:07:39.140822    3646 config.go:182] Loaded profile config "multinode-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:07:39.140828    3646 status.go:255] checking status of multinode-606000 ...
	I0307 10:07:39.141050    3646 status.go:330] multinode-606000 host status = "Stopped" (err=<nil>)
	I0307 10:07:39.141055    3646 status.go:343] host is not running, skipping remaining checks
	I0307 10:07:39.141057    3646 status.go:257] multinode-606000 status: &{Name:multinode-606000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-606000 status --alsologtostderr": multinode-606000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-606000 -n multinode-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-606000 -n multinode-606000: exit status 7 (30.717084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)
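
Exit status 85 (GUEST_NODE_RETRIEVE) follows directly from the profile contents shown under ProfileList: the Nodes list holds only the unnamed control-plane entry, so a lookup for m03 cannot succeed. A hypothetical sketch of such a lookup, not minikube's actual implementation:

    package main

    import (
        "errors"
        "fmt"
        "os"
    )

    type node struct{ Name string }

    func findNode(nodes []node, name string) (node, error) {
        for _, n := range nodes {
            if n.Name == name {
                return n, nil
            }
        }
        return node{}, errors.New("retrieving node: Could not find node " + name)
    }

    func main() {
        nodes := []node{{Name: ""}} // what the captured profile JSON contains
        if _, err := findNode(nodes, "m03"); err != nil {
            fmt.Println("X Exiting due to GUEST_NODE_RETRIEVE:", err)
            os.Exit(85) // the exit code observed above
        }
    }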

TestMultiNode/serial/StartAfterStop (51.36s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-606000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-606000 node start m03 -v=7 --alsologtostderr: exit status 85 (47.978125ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0307 10:07:39.202782    3650 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:07:39.203006    3650 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:07:39.203009    3650 out.go:304] Setting ErrFile to fd 2...
	I0307 10:07:39.203011    3650 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:07:39.203154    3650 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:07:39.203384    3650 mustload.go:65] Loading cluster: multinode-606000
	I0307 10:07:39.203580    3650 config.go:182] Loaded profile config "multinode-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:07:39.208061    3650 out.go:177] 
	W0307 10:07:39.211125    3650 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0307 10:07:39.211130    3650 out.go:239] * 
	* 
	W0307 10:07:39.212804    3650 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:07:39.216051    3650 out.go:177] 

** /stderr **
multinode_test.go:284: I0307 10:07:39.202782    3650 out.go:291] Setting OutFile to fd 1 ...
I0307 10:07:39.203006    3650 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 10:07:39.203009    3650 out.go:304] Setting ErrFile to fd 2...
I0307 10:07:39.203011    3650 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 10:07:39.203154    3650 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
I0307 10:07:39.203384    3650 mustload.go:65] Loading cluster: multinode-606000
I0307 10:07:39.203580    3650 config.go:182] Loaded profile config "multinode-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 10:07:39.208061    3650 out.go:177] 
W0307 10:07:39.211125    3650 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0307 10:07:39.211130    3650 out.go:239] * 
* 
W0307 10:07:39.212804    3650 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0307 10:07:39.216051    3650 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-606000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-606000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-606000 status -v=7 --alsologtostderr: exit status 7 (31.317459ms)

-- stdout --
	multinode-606000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0307 10:07:39.250782    3652 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:07:39.250917    3652 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:07:39.250920    3652 out.go:304] Setting ErrFile to fd 2...
	I0307 10:07:39.250923    3652 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:07:39.251035    3652 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:07:39.251158    3652 out.go:298] Setting JSON to false
	I0307 10:07:39.251170    3652 mustload.go:65] Loading cluster: multinode-606000
	I0307 10:07:39.251221    3652 notify.go:220] Checking for updates...
	I0307 10:07:39.251361    3652 config.go:182] Loaded profile config "multinode-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:07:39.251372    3652 status.go:255] checking status of multinode-606000 ...
	I0307 10:07:39.251591    3652 status.go:330] multinode-606000 host status = "Stopped" (err=<nil>)
	I0307 10:07:39.251595    3652 status.go:343] host is not running, skipping remaining checks
	I0307 10:07:39.251597    3652 status.go:257] multinode-606000 status: &{Name:multinode-606000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-606000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-606000 status -v=7 --alsologtostderr: exit status 7 (74.525625ms)

-- stdout --
	multinode-606000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0307 10:07:40.753653    3654 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:07:40.753846    3654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:07:40.753850    3654 out.go:304] Setting ErrFile to fd 2...
	I0307 10:07:40.753853    3654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:07:40.754023    3654 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:07:40.754202    3654 out.go:298] Setting JSON to false
	I0307 10:07:40.754218    3654 mustload.go:65] Loading cluster: multinode-606000
	I0307 10:07:40.754254    3654 notify.go:220] Checking for updates...
	I0307 10:07:40.754456    3654 config.go:182] Loaded profile config "multinode-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:07:40.754465    3654 status.go:255] checking status of multinode-606000 ...
	I0307 10:07:40.754729    3654 status.go:330] multinode-606000 host status = "Stopped" (err=<nil>)
	I0307 10:07:40.754734    3654 status.go:343] host is not running, skipping remaining checks
	I0307 10:07:40.754737    3654 status.go:257] multinode-606000 status: &{Name:multinode-606000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-606000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-606000 status -v=7 --alsologtostderr: exit status 7 (75.266417ms)

-- stdout --
	multinode-606000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0307 10:07:42.597642    3656 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:07:42.597812    3656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:07:42.597817    3656 out.go:304] Setting ErrFile to fd 2...
	I0307 10:07:42.597820    3656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:07:42.597969    3656 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:07:42.598124    3656 out.go:298] Setting JSON to false
	I0307 10:07:42.598140    3656 mustload.go:65] Loading cluster: multinode-606000
	I0307 10:07:42.598175    3656 notify.go:220] Checking for updates...
	I0307 10:07:42.598366    3656 config.go:182] Loaded profile config "multinode-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:07:42.598373    3656 status.go:255] checking status of multinode-606000 ...
	I0307 10:07:42.598638    3656 status.go:330] multinode-606000 host status = "Stopped" (err=<nil>)
	I0307 10:07:42.598643    3656 status.go:343] host is not running, skipping remaining checks
	I0307 10:07:42.598646    3656 status.go:257] multinode-606000 status: &{Name:multinode-606000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-606000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-606000 status -v=7 --alsologtostderr: exit status 7 (72.617542ms)

-- stdout --
	multinode-606000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0307 10:07:45.507516    3658 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:07:45.507656    3658 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:07:45.507660    3658 out.go:304] Setting ErrFile to fd 2...
	I0307 10:07:45.507663    3658 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:07:45.507812    3658 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:07:45.507961    3658 out.go:298] Setting JSON to false
	I0307 10:07:45.507981    3658 mustload.go:65] Loading cluster: multinode-606000
	I0307 10:07:45.508011    3658 notify.go:220] Checking for updates...
	I0307 10:07:45.508217    3658 config.go:182] Loaded profile config "multinode-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:07:45.508225    3658 status.go:255] checking status of multinode-606000 ...
	I0307 10:07:45.508518    3658 status.go:330] multinode-606000 host status = "Stopped" (err=<nil>)
	I0307 10:07:45.508523    3658 status.go:343] host is not running, skipping remaining checks
	I0307 10:07:45.508526    3658 status.go:257] multinode-606000 status: &{Name:multinode-606000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-606000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-606000 status -v=7 --alsologtostderr: exit status 7 (75.345917ms)

-- stdout --
	multinode-606000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0307 10:07:48.694114    3660 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:07:48.694297    3660 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:07:48.694301    3660 out.go:304] Setting ErrFile to fd 2...
	I0307 10:07:48.694305    3660 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:07:48.694490    3660 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:07:48.694660    3660 out.go:298] Setting JSON to false
	I0307 10:07:48.694676    3660 mustload.go:65] Loading cluster: multinode-606000
	I0307 10:07:48.694705    3660 notify.go:220] Checking for updates...
	I0307 10:07:48.694926    3660 config.go:182] Loaded profile config "multinode-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:07:48.694937    3660 status.go:255] checking status of multinode-606000 ...
	I0307 10:07:48.695217    3660 status.go:330] multinode-606000 host status = "Stopped" (err=<nil>)
	I0307 10:07:48.695222    3660 status.go:343] host is not running, skipping remaining checks
	I0307 10:07:48.695225    3660 status.go:257] multinode-606000 status: &{Name:multinode-606000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-606000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-606000 status -v=7 --alsologtostderr: exit status 7 (74.852125ms)

-- stdout --
	multinode-606000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0307 10:07:53.201440    3662 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:07:53.201603    3662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:07:53.201607    3662 out.go:304] Setting ErrFile to fd 2...
	I0307 10:07:53.201610    3662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:07:53.201761    3662 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:07:53.201922    3662 out.go:298] Setting JSON to false
	I0307 10:07:53.201938    3662 mustload.go:65] Loading cluster: multinode-606000
	I0307 10:07:53.201975    3662 notify.go:220] Checking for updates...
	I0307 10:07:53.202189    3662 config.go:182] Loaded profile config "multinode-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:07:53.202197    3662 status.go:255] checking status of multinode-606000 ...
	I0307 10:07:53.202451    3662 status.go:330] multinode-606000 host status = "Stopped" (err=<nil>)
	I0307 10:07:53.202456    3662 status.go:343] host is not running, skipping remaining checks
	I0307 10:07:53.202459    3662 status.go:257] multinode-606000 status: &{Name:multinode-606000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-606000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-606000 status -v=7 --alsologtostderr: exit status 7 (71.1135ms)

-- stdout --
	multinode-606000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0307 10:08:01.441352    3664 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:08:01.441517    3664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:08:01.441522    3664 out.go:304] Setting ErrFile to fd 2...
	I0307 10:08:01.441525    3664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:08:01.441723    3664 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:08:01.441911    3664 out.go:298] Setting JSON to false
	I0307 10:08:01.441928    3664 mustload.go:65] Loading cluster: multinode-606000
	I0307 10:08:01.441972    3664 notify.go:220] Checking for updates...
	I0307 10:08:01.442208    3664 config.go:182] Loaded profile config "multinode-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:08:01.442217    3664 status.go:255] checking status of multinode-606000 ...
	I0307 10:08:01.442521    3664 status.go:330] multinode-606000 host status = "Stopped" (err=<nil>)
	I0307 10:08:01.442526    3664 status.go:343] host is not running, skipping remaining checks
	I0307 10:08:01.442530    3664 status.go:257] multinode-606000 status: &{Name:multinode-606000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-606000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-606000 status -v=7 --alsologtostderr: exit status 7 (74.766958ms)

-- stdout --
	multinode-606000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0307 10:08:10.023754    3666 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:08:10.023901    3666 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:08:10.023905    3666 out.go:304] Setting ErrFile to fd 2...
	I0307 10:08:10.023908    3666 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:08:10.024067    3666 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:08:10.024210    3666 out.go:298] Setting JSON to false
	I0307 10:08:10.024225    3666 mustload.go:65] Loading cluster: multinode-606000
	I0307 10:08:10.024270    3666 notify.go:220] Checking for updates...
	I0307 10:08:10.024490    3666 config.go:182] Loaded profile config "multinode-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:08:10.024498    3666 status.go:255] checking status of multinode-606000 ...
	I0307 10:08:10.024762    3666 status.go:330] multinode-606000 host status = "Stopped" (err=<nil>)
	I0307 10:08:10.024767    3666 status.go:343] host is not running, skipping remaining checks
	I0307 10:08:10.024770    3666 status.go:257] multinode-606000 status: &{Name:multinode-606000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-606000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-606000 status -v=7 --alsologtostderr: exit status 7 (74.550042ms)

-- stdout --
	multinode-606000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0307 10:08:30.495978    3673 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:08:30.496175    3673 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:08:30.496179    3673 out.go:304] Setting ErrFile to fd 2...
	I0307 10:08:30.496182    3673 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:08:30.496352    3673 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:08:30.496529    3673 out.go:298] Setting JSON to false
	I0307 10:08:30.496544    3673 mustload.go:65] Loading cluster: multinode-606000
	I0307 10:08:30.496587    3673 notify.go:220] Checking for updates...
	I0307 10:08:30.496801    3673 config.go:182] Loaded profile config "multinode-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:08:30.496809    3673 status.go:255] checking status of multinode-606000 ...
	I0307 10:08:30.497072    3673 status.go:330] multinode-606000 host status = "Stopped" (err=<nil>)
	I0307 10:08:30.497077    3673 status.go:343] host is not running, skipping remaining checks
	I0307 10:08:30.497080    3673 status.go:257] multinode-606000 status: &{Name:multinode-606000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-606000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-606000 -n multinode-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-606000 -n multinode-606000: exit status 7 (34.238917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (51.36s)
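
The repeated status runs at multinode_test.go:290 trace out a retry loop: the gaps between the timestamped attempts grow from about 1.5s to over 20s before the test gives up after roughly 51s. A hypothetical sketch of such a poll-with-backoff loop (the exact intervals and growth factor are guesses, not the harness's real values):

    package main

    import (
        "fmt"
        "time"
    )

    // pollStatus re-runs check with growing delays until it passes or the
    // deadline expires, echoing the cadence of the attempts above.
    func pollStatus(check func() error, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        delay := 1500 * time.Millisecond
        err := check()
        for err != nil && time.Now().Before(deadline) {
            time.Sleep(delay)
            delay = delay * 3 / 2 // back off between attempts
            err = check()
        }
        return err
    }

    func main() {
        err := pollStatus(func() error { return fmt.Errorf("host Stopped") }, 5*time.Second)
        fmt.Println(err) // host Stopped
    }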

TestMultiNode/serial/RestartKeepsNodes (9.21s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-606000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-606000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-606000: (3.855830792s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-606000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-606000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.2193755s)

-- stdout --
	* [multinode-606000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-606000" primary control-plane node in "multinode-606000" cluster
	* Restarting existing qemu2 VM for "multinode-606000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-606000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 10:08:34.482211    3699 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:08:34.482386    3699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:08:34.482391    3699 out.go:304] Setting ErrFile to fd 2...
	I0307 10:08:34.482393    3699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:08:34.482551    3699 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:08:34.483789    3699 out.go:298] Setting JSON to false
	I0307 10:08:34.502329    3699 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4086,"bootTime":1709830828,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 10:08:34.502396    3699 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:08:34.507738    3699 out.go:177] * [multinode-606000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 10:08:34.515729    3699 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 10:08:34.515770    3699 notify.go:220] Checking for updates...
	I0307 10:08:34.518702    3699 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:08:34.521773    3699 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 10:08:34.524766    3699 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:08:34.527692    3699 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 10:08:34.530723    3699 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:08:34.534168    3699 config.go:182] Loaded profile config "multinode-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:08:34.534242    3699 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:08:34.538786    3699 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 10:08:34.545743    3699 start.go:297] selected driver: qemu2
	I0307 10:08:34.545750    3699 start.go:901] validating driver "qemu2" against &{Name:multinode-606000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-606000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:08:34.545814    3699 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:08:34.548228    3699 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 10:08:34.548281    3699 cni.go:84] Creating CNI manager for ""
	I0307 10:08:34.548286    3699 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0307 10:08:34.548330    3699 start.go:340] cluster config:
	{Name:multinode-606000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-606000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:08:34.552850    3699 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:08:34.559727    3699 out.go:177] * Starting "multinode-606000" primary control-plane node in "multinode-606000" cluster
	I0307 10:08:34.563599    3699 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 10:08:34.563615    3699 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 10:08:34.563630    3699 cache.go:56] Caching tarball of preloaded images
	I0307 10:08:34.563687    3699 preload.go:173] Found /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 10:08:34.563693    3699 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 10:08:34.563762    3699 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/multinode-606000/config.json ...
	I0307 10:08:34.564239    3699 start.go:360] acquireMachinesLock for multinode-606000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:08:34.564273    3699 start.go:364] duration metric: took 28.417µs to acquireMachinesLock for "multinode-606000"
	I0307 10:08:34.564282    3699 start.go:96] Skipping create...Using existing machine configuration
	I0307 10:08:34.564288    3699 fix.go:54] fixHost starting: 
	I0307 10:08:34.564409    3699 fix.go:112] recreateIfNeeded on multinode-606000: state=Stopped err=<nil>
	W0307 10:08:34.564418    3699 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 10:08:34.568765    3699 out.go:177] * Restarting existing qemu2 VM for "multinode-606000" ...
	I0307 10:08:34.576784    3699 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/multinode-606000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/multinode-606000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/multinode-606000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:16:6b:1f:7e:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/multinode-606000/disk.qcow2
	I0307 10:08:34.578897    3699 main.go:141] libmachine: STDOUT: 
	I0307 10:08:34.578921    3699 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:08:34.578953    3699 fix.go:56] duration metric: took 14.665417ms for fixHost
	I0307 10:08:34.578957    3699 start.go:83] releasing machines lock for "multinode-606000", held for 14.679542ms
	W0307 10:08:34.578964    3699 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:08:34.578995    3699 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:08:34.579000    3699 start.go:728] Will try again in 5 seconds ...
	I0307 10:08:39.581059    3699 start.go:360] acquireMachinesLock for multinode-606000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:08:39.581451    3699 start.go:364] duration metric: took 300.417µs to acquireMachinesLock for "multinode-606000"
	I0307 10:08:39.581579    3699 start.go:96] Skipping create...Using existing machine configuration
	I0307 10:08:39.581600    3699 fix.go:54] fixHost starting: 
	I0307 10:08:39.582372    3699 fix.go:112] recreateIfNeeded on multinode-606000: state=Stopped err=<nil>
	W0307 10:08:39.582396    3699 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 10:08:39.587759    3699 out.go:177] * Restarting existing qemu2 VM for "multinode-606000" ...
	I0307 10:08:39.591859    3699 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/multinode-606000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/multinode-606000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/multinode-606000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:16:6b:1f:7e:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/multinode-606000/disk.qcow2
	I0307 10:08:39.599809    3699 main.go:141] libmachine: STDOUT: 
	I0307 10:08:39.599957    3699 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:08:39.600008    3699 fix.go:56] duration metric: took 18.411541ms for fixHost
	I0307 10:08:39.600023    3699 start.go:83] releasing machines lock for "multinode-606000", held for 18.54775ms
	W0307 10:08:39.600207    3699 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-606000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-606000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:08:39.608504    3699 out.go:177] 
	W0307 10:08:39.612944    3699 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:08:39.612970    3699 out.go:239] * 
	* 
	W0307 10:08:39.615279    3699 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:08:39.625751    3699 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-606000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-606000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-606000 -n multinode-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-606000 -n multinode-606000: exit status 7 (34.716334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (9.21s)
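Every qemu2 start in this block fails at the same point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so the VM never gets its virtio-net device and minikube aborts with GUEST_PROVISION (exit status 80). The remaining multinode subtests below inherit a host stuck in state=Stopped. A minimal host-side probe for this condition, written as a standalone Go sketch (not part of the test suite; the socket path is the SocketVMnetPath recorded in the cluster config above):

	// socketprobe.go: standalone sketch that checks whether anything is
	// accepting connections on the socket_vmnet control socket.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// On this run the daemon was down, so this would print "connection refused".
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}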

                                                
                                    
TestMultiNode/serial/DeleteNode (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-606000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-606000 node delete m03: exit status 83 (41.7975ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-606000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-606000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-606000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-606000 status --alsologtostderr: exit status 7 (30.841167ms)

                                                
                                                
-- stdout --
	multinode-606000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 10:08:39.816169    3716 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:08:39.816328    3716 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:08:39.816335    3716 out.go:304] Setting ErrFile to fd 2...
	I0307 10:08:39.816337    3716 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:08:39.816482    3716 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:08:39.816605    3716 out.go:298] Setting JSON to false
	I0307 10:08:39.816622    3716 mustload.go:65] Loading cluster: multinode-606000
	I0307 10:08:39.816675    3716 notify.go:220] Checking for updates...
	I0307 10:08:39.816806    3716 config.go:182] Loaded profile config "multinode-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:08:39.816811    3716 status.go:255] checking status of multinode-606000 ...
	I0307 10:08:39.817019    3716 status.go:330] multinode-606000 host status = "Stopped" (err=<nil>)
	I0307 10:08:39.817023    3716 status.go:343] host is not running, skipping remaining checks
	I0307 10:08:39.817025    3716 status.go:257] multinode-606000 status: &{Name:multinode-606000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-606000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-606000 -n multinode-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-606000 -n multinode-606000: exit status 7 (31.941917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
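DeleteNode is a cascade failure rather than an independent bug: with the control plane stopped, "node delete" is refused outright (exit status 83, with the "host is not running" message above), while "status" reports the stopped cluster with exit status 7, which the post-mortem helper treats as possibly benign. A hedged Go sketch of how a wrapper could recover these exit codes the way the "(dbg) Non-zero exit" lines report them (hypothetical helper, not taken from helpers_test.go):

	// exitcode.go: hypothetical wrapper that runs a minikube command and
	// reports its exit code, mirroring the "(dbg) Non-zero exit" lines above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "status", "--format={{.Host}}", "-p", "multinode-606000")
		out, err := cmd.Output() // stdout is returned even on a non-zero exit
		code := 0
		if ee, ok := err.(*exec.ExitError); ok {
			code = ee.ExitCode() // 7 on this run: profile exists but host is stopped
		}
		fmt.Printf("status=%q exit=%d\n", strings.TrimSpace(string(out)), code)
	}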

                                                
                                    
TestMultiNode/serial/StopMultiNode (3.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-606000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-606000 stop: (3.369928583s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-606000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-606000 status: exit status 7 (68.098709ms)

                                                
                                                
-- stdout --
	multinode-606000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-606000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-606000 status --alsologtostderr: exit status 7 (34.136083ms)

                                                
                                                
-- stdout --
	multinode-606000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 10:08:43.320829    3740 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:08:43.320983    3740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:08:43.320986    3740 out.go:304] Setting ErrFile to fd 2...
	I0307 10:08:43.320989    3740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:08:43.321114    3740 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:08:43.321235    3740 out.go:298] Setting JSON to false
	I0307 10:08:43.321247    3740 mustload.go:65] Loading cluster: multinode-606000
	I0307 10:08:43.321307    3740 notify.go:220] Checking for updates...
	I0307 10:08:43.321483    3740 config.go:182] Loaded profile config "multinode-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:08:43.321489    3740 status.go:255] checking status of multinode-606000 ...
	I0307 10:08:43.321711    3740 status.go:330] multinode-606000 host status = "Stopped" (err=<nil>)
	I0307 10:08:43.321715    3740 status.go:343] host is not running, skipping remaining checks
	I0307 10:08:43.321718    3740 status.go:257] multinode-606000 status: &{Name:multinode-606000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-606000 status --alsologtostderr": multinode-606000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-606000 status --alsologtostderr": multinode-606000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-606000 -n multinode-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-606000 -n multinode-606000: exit status 7 (31.585417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.50s)
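The StopMultiNode assertions fire because the status output above contains a single "host: Stopped" stanza where a two-node cluster should produce two; the second node was never created earlier in this serial run. A self-contained sketch in the spirit of the checks at multinode_test.go:364 and :368 (the real test code is assumed, not copied):

	package multinode

	import (
		"strings"
		"testing"
	)

	// TestStoppedStanzaCount: a two-node "stop" should leave two "host: Stopped"
	// stanzas in the status output; this run produced only one because the
	// second node never joined. (t.Logf is used so the sketch itself passes.)
	func TestStoppedStanzaCount(t *testing.T) {
		statusOutput := "multinode-606000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
		if got := strings.Count(statusOutput, "host: Stopped"); got != 2 {
			t.Logf("incorrect number of stopped hosts: got %d, want 2", got)
		}
	}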

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-606000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-606000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.179287667s)

                                                
                                                
-- stdout --
	* [multinode-606000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-606000" primary control-plane node in "multinode-606000" cluster
	* Restarting existing qemu2 VM for "multinode-606000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-606000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 10:08:43.384178    3744 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:08:43.384325    3744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:08:43.384329    3744 out.go:304] Setting ErrFile to fd 2...
	I0307 10:08:43.384330    3744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:08:43.384451    3744 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:08:43.385460    3744 out.go:298] Setting JSON to false
	I0307 10:08:43.401169    3744 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4095,"bootTime":1709830828,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 10:08:43.401234    3744 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:08:43.405696    3744 out.go:177] * [multinode-606000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 10:08:43.413304    3744 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 10:08:43.413347    3744 notify.go:220] Checking for updates...
	I0307 10:08:43.418297    3744 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:08:43.421315    3744 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 10:08:43.424370    3744 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:08:43.427353    3744 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 10:08:43.430331    3744 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:08:43.433587    3744 config.go:182] Loaded profile config "multinode-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:08:43.433848    3744 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:08:43.438314    3744 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 10:08:43.445305    3744 start.go:297] selected driver: qemu2
	I0307 10:08:43.445313    3744 start.go:901] validating driver "qemu2" against &{Name:multinode-606000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-606000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:08:43.445375    3744 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:08:43.447623    3744 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 10:08:43.447663    3744 cni.go:84] Creating CNI manager for ""
	I0307 10:08:43.447669    3744 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0307 10:08:43.447720    3744 start.go:340] cluster config:
	{Name:multinode-606000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-606000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:08:43.452004    3744 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:08:43.459343    3744 out.go:177] * Starting "multinode-606000" primary control-plane node in "multinode-606000" cluster
	I0307 10:08:43.462367    3744 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 10:08:43.462385    3744 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 10:08:43.462396    3744 cache.go:56] Caching tarball of preloaded images
	I0307 10:08:43.462443    3744 preload.go:173] Found /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 10:08:43.462449    3744 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 10:08:43.462529    3744 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/multinode-606000/config.json ...
	I0307 10:08:43.462990    3744 start.go:360] acquireMachinesLock for multinode-606000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:08:43.463017    3744 start.go:364] duration metric: took 21.042µs to acquireMachinesLock for "multinode-606000"
	I0307 10:08:43.463025    3744 start.go:96] Skipping create...Using existing machine configuration
	I0307 10:08:43.463031    3744 fix.go:54] fixHost starting: 
	I0307 10:08:43.463146    3744 fix.go:112] recreateIfNeeded on multinode-606000: state=Stopped err=<nil>
	W0307 10:08:43.463155    3744 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 10:08:43.467292    3744 out.go:177] * Restarting existing qemu2 VM for "multinode-606000" ...
	I0307 10:08:43.474382    3744 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/multinode-606000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/multinode-606000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/multinode-606000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:16:6b:1f:7e:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/multinode-606000/disk.qcow2
	I0307 10:08:43.476350    3744 main.go:141] libmachine: STDOUT: 
	I0307 10:08:43.476372    3744 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:08:43.476411    3744 fix.go:56] duration metric: took 13.379416ms for fixHost
	I0307 10:08:43.476417    3744 start.go:83] releasing machines lock for "multinode-606000", held for 13.396334ms
	W0307 10:08:43.476423    3744 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:08:43.476453    3744 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:08:43.476458    3744 start.go:728] Will try again in 5 seconds ...
	I0307 10:08:48.478501    3744 start.go:360] acquireMachinesLock for multinode-606000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:08:48.478942    3744 start.go:364] duration metric: took 333.625µs to acquireMachinesLock for "multinode-606000"
	I0307 10:08:48.479072    3744 start.go:96] Skipping create...Using existing machine configuration
	I0307 10:08:48.479092    3744 fix.go:54] fixHost starting: 
	I0307 10:08:48.479805    3744 fix.go:112] recreateIfNeeded on multinode-606000: state=Stopped err=<nil>
	W0307 10:08:48.479830    3744 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 10:08:48.483642    3744 out.go:177] * Restarting existing qemu2 VM for "multinode-606000" ...
	I0307 10:08:48.490794    3744 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/multinode-606000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/multinode-606000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/multinode-606000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:16:6b:1f:7e:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/multinode-606000/disk.qcow2
	I0307 10:08:48.499286    3744 main.go:141] libmachine: STDOUT: 
	I0307 10:08:48.499346    3744 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:08:48.499413    3744 fix.go:56] duration metric: took 20.324084ms for fixHost
	I0307 10:08:48.499433    3744 start.go:83] releasing machines lock for "multinode-606000", held for 20.471417ms
	W0307 10:08:48.499577    3744 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-606000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-606000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:08:48.506381    3744 out.go:177] 
	W0307 10:08:48.509694    3744 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:08:48.509719    3744 out.go:239] * 
	* 
	W0307 10:08:48.512385    3744 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:08:48.519524    3744 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-606000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-606000 -n multinode-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-606000 -n multinode-606000: exit status 7 (68.458375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
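RestartMultiNode replays the same two-attempt sequence as RestartKeepsNodes: fixHost fails, start.go:728 waits five seconds, a second fixHost fails, and the run exits 80, which accounts for the ~5.2s duration. The shape of that retry loop as reconstructed from the timestamps in the log (a sketch, not the actual start.go code):

	// retryshape.go: sketch of the two-attempt, 5s-apart retry visible in the
	// log above; reconstructed from the timestamps, not taken from start.go.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func startHost() error {
		return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		var err error
		for attempt := 0; attempt < 2; attempt++ {
			if err = startHost(); err == nil {
				return
			}
			if attempt == 0 {
				fmt.Println("! StartHost failed, but will try again:", err)
				time.Sleep(5 * time.Second)
			}
		}
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
	}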

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (20s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-606000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-606000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-606000-m01 --driver=qemu2 : exit status 80 (9.828425459s)

                                                
                                                
-- stdout --
	* [multinode-606000-m01] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-606000-m01" primary control-plane node in "multinode-606000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-606000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-606000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-606000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-606000-m02 --driver=qemu2 : exit status 80 (9.912442458s)

                                                
                                                
-- stdout --
	* [multinode-606000-m02] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-606000-m02" primary control-plane node in "multinode-606000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-606000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-606000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-606000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-606000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-606000: exit status 83 (85.083916ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-606000"

                                                
                                                
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-606000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-606000 -n multinode-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-606000 -n multinode-606000: exit status 7 (32.498042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.00s)
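ValidateNameConflict never reaches the behavior it exists to check: profiles named multinode-606000-m01 and multinode-606000-m02 collide with minikube's <cluster>-m<NN> node-naming scheme, but both creations die on the socket_vmnet connection first, so the 20s are spent on two doomed VM starts. A hypothetical illustration of the collision under test (the regexp and program are assumptions for illustration, not minikube code):

	// nameconflict.go: hypothetical illustration of why a profile named
	// "multinode-606000-m01" is ambiguous: it parses as node m01 of the
	// existing cluster "multinode-606000".
	package main

	import (
		"fmt"
		"regexp"
	)

	var nodeSuffix = regexp.MustCompile(`^(.+)-m(\d+)$`)

	func main() {
		profile := "multinode-606000-m01"
		if m := nodeSuffix.FindStringSubmatch(profile); m != nil {
			fmt.Printf("%q parses as node m%s of cluster %q\n", profile, m[2], m[1])
		}
	}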

                                                
                                    
TestPreload (10.01s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-938000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
E0307 10:09:13.808097    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/client.crt: no such file or directory
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-938000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.841918792s)

                                                
                                                
-- stdout --
	* [test-preload-938000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-938000" primary control-plane node in "test-preload-938000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-938000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 10:09:08.765231    3801 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:09:08.765370    3801 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:09:08.765373    3801 out.go:304] Setting ErrFile to fd 2...
	I0307 10:09:08.765376    3801 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:09:08.765491    3801 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:09:08.766523    3801 out.go:298] Setting JSON to false
	I0307 10:09:08.782468    3801 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4120,"bootTime":1709830828,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 10:09:08.782519    3801 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:09:08.787045    3801 out.go:177] * [test-preload-938000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 10:09:08.794935    3801 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 10:09:08.799917    3801 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:09:08.794984    3801 notify.go:220] Checking for updates...
	I0307 10:09:08.802907    3801 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 10:09:08.805835    3801 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:09:08.808870    3801 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 10:09:08.810362    3801 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:09:08.814190    3801 config.go:182] Loaded profile config "multinode-606000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:09:08.814242    3801 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:09:08.817877    3801 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 10:09:08.822860    3801 start.go:297] selected driver: qemu2
	I0307 10:09:08.822868    3801 start.go:901] validating driver "qemu2" against <nil>
	I0307 10:09:08.822874    3801 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:09:08.825070    3801 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 10:09:08.827896    3801 out.go:177] * Automatically selected the socket_vmnet network
	I0307 10:09:08.830986    3801 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 10:09:08.831047    3801 cni.go:84] Creating CNI manager for ""
	I0307 10:09:08.831054    3801 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 10:09:08.831058    3801 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 10:09:08.831096    3801 start.go:340] cluster config:
	{Name:test-preload-938000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-938000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:09:08.835626    3801 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:09:08.842933    3801 out.go:177] * Starting "test-preload-938000" primary control-plane node in "test-preload-938000" cluster
	I0307 10:09:08.846875    3801 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0307 10:09:08.846960    3801 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/test-preload-938000/config.json ...
	I0307 10:09:08.846983    3801 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/test-preload-938000/config.json: {Name:mk04cfaebdc64ac74d1d56f58101cc410c67df32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:09:08.847024    3801 cache.go:107] acquiring lock: {Name:mk2355d8ccac63c405f2c5c7b9ec676af4c8285b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:09:08.847037    3801 cache.go:107] acquiring lock: {Name:mk55b0c5ddedbe4e05f714622b37932bb306454f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:09:08.847044    3801 cache.go:107] acquiring lock: {Name:mkbc233cab5b50f82cde6b191b89552605094f40 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:09:08.847113    3801 cache.go:107] acquiring lock: {Name:mk0fde44536df7e501d06409410856aad3132870 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:09:08.847233    3801 start.go:360] acquireMachinesLock for test-preload-938000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:09:08.847246    3801 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0307 10:09:08.847253    3801 cache.go:107] acquiring lock: {Name:mkfeb4d90f7ade04cf57f99ff2fea94cdb6fe134 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:09:08.847275    3801 start.go:364] duration metric: took 30.875µs to acquireMachinesLock for "test-preload-938000"
	I0307 10:09:08.847259    3801 cache.go:107] acquiring lock: {Name:mk5862399e509a3f767ce1f36a2c8019148059a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:09:08.847255    3801 cache.go:107] acquiring lock: {Name:mka25af1fd3b261a6b60984f17b64186f92685b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:09:08.847246    3801 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0307 10:09:08.847287    3801 start.go:93] Provisioning new machine with config: &{Name:test-preload-938000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-938000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:09:08.847325    3801 cache.go:107] acquiring lock: {Name:mkaa9e522e797cbd0075136f0b1dc8863c92fb9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:09:08.847398    3801 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0307 10:09:08.847498    3801 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0307 10:09:08.847502    3801 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0307 10:09:08.847242    3801 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 10:09:08.847523    3801 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0307 10:09:08.847343    3801 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:09:08.847599    3801 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0307 10:09:08.851747    3801 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 10:09:08.868190    3801 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0307 10:09:08.868215    3801 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0307 10:09:08.868215    3801 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 10:09:08.868261    3801 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0307 10:09:08.869068    3801 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0307 10:09:08.869112    3801 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0307 10:09:08.869362    3801 start.go:159] libmachine.API.Create for "test-preload-938000" (driver="qemu2")
	I0307 10:09:08.869394    3801 client.go:168] LocalClient.Create starting
	I0307 10:09:08.869465    3801 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:09:08.869497    3801 main.go:141] libmachine: Decoding PEM data...
	I0307 10:09:08.869504    3801 main.go:141] libmachine: Parsing certificate...
	I0307 10:09:08.869557    3801 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:09:08.869579    3801 main.go:141] libmachine: Decoding PEM data...
	I0307 10:09:08.869590    3801 main.go:141] libmachine: Parsing certificate...
	I0307 10:09:08.869897    3801 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:09:08.870771    3801 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0307 10:09:08.870963    3801 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0307 10:09:09.008845    3801 main.go:141] libmachine: Creating SSH key...
	I0307 10:09:09.061744    3801 main.go:141] libmachine: Creating Disk image...
	I0307 10:09:09.061753    3801 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:09:09.061925    3801 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/test-preload-938000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/test-preload-938000/disk.qcow2
	I0307 10:09:09.073875    3801 main.go:141] libmachine: STDOUT: 
	I0307 10:09:09.073902    3801 main.go:141] libmachine: STDERR: 
	I0307 10:09:09.073966    3801 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/test-preload-938000/disk.qcow2 +20000M
	I0307 10:09:09.084840    3801 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:09:09.084858    3801 main.go:141] libmachine: STDERR: 
	I0307 10:09:09.084879    3801 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/test-preload-938000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/test-preload-938000/disk.qcow2
	I0307 10:09:09.084882    3801 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:09:09.084921    3801 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/test-preload-938000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/test-preload-938000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/test-preload-938000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:6a:d2:ef:b7:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/test-preload-938000/disk.qcow2
	I0307 10:09:09.086554    3801 main.go:141] libmachine: STDOUT: 
	I0307 10:09:09.086571    3801 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:09:09.086587    3801 client.go:171] duration metric: took 217.195792ms to LocalClient.Create
	I0307 10:09:10.777662    3801 cache.go:162] opening:  /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0307 10:09:10.781372    3801 cache.go:162] opening:  /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0307 10:09:10.782121    3801 cache.go:162] opening:  /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0307 10:09:11.022421    3801 cache.go:162] opening:  /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0307 10:09:11.086892    3801 start.go:128] duration metric: took 2.239357084s to createHost
	I0307 10:09:11.086947    3801 start.go:83] releasing machines lock for "test-preload-938000", held for 2.239736375s
	W0307 10:09:11.087017    3801 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:09:11.098892    3801 out.go:177] * Deleting "test-preload-938000" in qemu2 ...
	W0307 10:09:11.120801    3801 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:09:11.120835    3801 start.go:728] Will try again in 5 seconds ...
	I0307 10:09:11.158590    3801 cache.go:162] opening:  /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0307 10:09:11.184415    3801 cache.go:162] opening:  /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0307 10:09:11.207665    3801 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0307 10:09:11.207762    3801 cache.go:162] opening:  /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0307 10:09:11.292649    3801 cache.go:157] /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0307 10:09:11.292696    3801 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 2.445678292s
	I0307 10:09:11.292771    3801 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0307 10:09:11.942144    3801 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0307 10:09:11.942233    3801 cache.go:162] opening:  /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0307 10:09:12.412520    3801 cache.go:157] /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0307 10:09:12.412562    3801 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.565423416s
	I0307 10:09:12.412586    3801 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0307 10:09:13.117396    3801 cache.go:157] /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0307 10:09:13.117453    3801 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 4.270315416s
	I0307 10:09:13.117497    3801 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0307 10:09:13.638491    3801 cache.go:157] /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0307 10:09:13.638544    3801 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 4.791661291s
	I0307 10:09:13.638571    3801 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0307 10:09:14.158388    3801 cache.go:157] /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0307 10:09:14.158429    3801 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.311571625s
	I0307 10:09:14.158447    3801 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0307 10:09:14.694804    3801 cache.go:157] /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0307 10:09:14.694857    3801 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.84778975s
	I0307 10:09:14.694896    3801 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0307 10:09:16.121239    3801 start.go:360] acquireMachinesLock for test-preload-938000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:09:16.121639    3801 start.go:364] duration metric: took 315.291µs to acquireMachinesLock for "test-preload-938000"
	I0307 10:09:16.121783    3801 start.go:93] Provisioning new machine with config: &{Name:test-preload-938000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-938000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:09:16.122083    3801 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:09:16.132724    3801 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 10:09:16.135179    3801 cache.go:157] /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0307 10:09:16.135340    3801 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 7.28855325s
	I0307 10:09:16.135384    3801 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0307 10:09:16.181826    3801 start.go:159] libmachine.API.Create for "test-preload-938000" (driver="qemu2")
	I0307 10:09:16.181869    3801 client.go:168] LocalClient.Create starting
	I0307 10:09:16.182006    3801 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:09:16.182077    3801 main.go:141] libmachine: Decoding PEM data...
	I0307 10:09:16.182092    3801 main.go:141] libmachine: Parsing certificate...
	I0307 10:09:16.182154    3801 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:09:16.182195    3801 main.go:141] libmachine: Decoding PEM data...
	I0307 10:09:16.182208    3801 main.go:141] libmachine: Parsing certificate...
	I0307 10:09:16.182700    3801 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:09:16.329033    3801 main.go:141] libmachine: Creating SSH key...
	I0307 10:09:16.497192    3801 main.go:141] libmachine: Creating Disk image...
	I0307 10:09:16.497201    3801 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:09:16.497405    3801 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/test-preload-938000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/test-preload-938000/disk.qcow2
	I0307 10:09:16.510534    3801 main.go:141] libmachine: STDOUT: 
	I0307 10:09:16.510558    3801 main.go:141] libmachine: STDERR: 
	I0307 10:09:16.510614    3801 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/test-preload-938000/disk.qcow2 +20000M
	I0307 10:09:16.521590    3801 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:09:16.521607    3801 main.go:141] libmachine: STDERR: 
	I0307 10:09:16.521615    3801 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/test-preload-938000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/test-preload-938000/disk.qcow2
	I0307 10:09:16.521620    3801 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:09:16.521658    3801 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/test-preload-938000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/test-preload-938000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/test-preload-938000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:8e:2b:c5:e1:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/test-preload-938000/disk.qcow2
	I0307 10:09:16.523495    3801 main.go:141] libmachine: STDOUT: 
	I0307 10:09:16.523510    3801 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:09:16.523523    3801 client.go:171] duration metric: took 341.660833ms to LocalClient.Create
	I0307 10:09:18.524035    3801 start.go:128] duration metric: took 2.401955375s to createHost
	I0307 10:09:18.524099    3801 start.go:83] releasing machines lock for "test-preload-938000", held for 2.402509375s
	W0307 10:09:18.524459    3801 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-938000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-938000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:09:18.540856    3801 out.go:177] 
	W0307 10:09:18.545088    3801 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:09:18.545117    3801 out.go:239] * 
	* 
	W0307 10:09:18.547672    3801 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:09:18.559024    3801 out.go:177] 
** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-938000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-03-07 10:09:18.578863 -0800 PST m=+2422.399876001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-938000 -n test-preload-938000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-938000 -n test-preload-938000: exit status 7 (67.596083ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-938000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-938000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-938000
--- FAIL: TestPreload (10.01s)
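
Note: every VM create in this run dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), which usually means no socket_vmnet daemon is listening on the host. As a quick diagnostic (a hypothetical probe, not part of the test suite), a few lines of Go confirm whether the socket accepts connections:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Socket path taken from the qemu2 driver invocation in the log above.
		const sock = "/var/run/socket_vmnet"

		// A "connection refused" error here reproduces the failure mode in the
		// log: the socket file may exist, but nothing is accepting on it.
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "dial %s: %v\n", sock, err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Printf("%s is accepting connections\n", sock)
	}

If the dial fails, restarting the daemon on the host (with a Homebrew install, typically via "sudo brew services start socket_vmnet") should clear this entire class of failures.
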
TestScheduledStopUnix (10.15s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-586000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-586000 --memory=2048 --driver=qemu2 : exit status 80 (9.97670225s)
-- stdout --
	* [scheduled-stop-586000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-586000" primary control-plane node in "scheduled-stop-586000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-586000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-586000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80
-- stdout --
	* [scheduled-stop-586000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-586000" primary control-plane node in "scheduled-stop-586000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-586000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-586000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-03-07 10:09:28.725549 -0800 PST m=+2432.546897084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-586000 -n scheduled-stop-586000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-586000 -n scheduled-stop-586000: exit status 7 (68.677792ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-586000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-586000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-586000
--- FAIL: TestScheduledStopUnix (10.15s)
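
The stdout above shows the driver's recovery shape: one "StartHost failed, but will try again" pass (delete the profile, recreate the VM, retry after 5 seconds), then exit status 80 when the second attempt hits the same connection-refused error. A minimal sketch of that fixed-delay retry pattern (illustrative only; startHost is a stand-in, not minikube's actual code):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the real VM-creation step; it always fails the
	// way this log does, so both attempts are exercised.
	func startHost() error {
		return errors.New(`connect to "/var/run/socket_vmnet": connection refused`)
	}

	func main() {
		const attempts = 2
		const delay = 5 * time.Second

		var err error
		for i := 1; i <= attempts; i++ {
			if err = startHost(); err == nil {
				fmt.Println("host started")
				return
			}
			if i < attempts {
				fmt.Printf("! StartHost failed, but will try again: %v\n", err)
				time.Sleep(delay)
			}
		}
		fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
	}

Both messages mirror the wording in the log; only the control flow is the point here.
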
TestSkaffold (16.55s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe2463620545 version
skaffold_test.go:63: skaffold version: v2.10.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-073000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-073000 --memory=2600 --driver=qemu2 : exit status 80 (9.827422333s)
-- stdout --
	* [skaffold-073000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-073000" primary control-plane node in "skaffold-073000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-073000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-073000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
skaffold_test.go:68: starting minikube: exit status 80
-- stdout --
	* [skaffold-073000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-073000" primary control-plane node in "skaffold-073000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-073000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-073000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-03-07 10:09:45.275585 -0800 PST m=+2449.097479917
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-073000 -n skaffold-073000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-073000 -n skaffold-073000: exit status 7 (65.632542ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-073000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-073000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-073000
--- FAIL: TestSkaffold (16.55s)
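
Disk preparation itself succeeds before every failed boot; the TestPreload log earlier shows the two qemu-img steps the driver runs: a raw-to-qcow2 convert followed by a "+20000M" resize. A self-contained sketch of that sequence through os/exec (hypothetical temp paths; assumes qemu-img is on PATH):

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
	)

	// run executes a command, echoing its output, and fails loudly on error.
	func run(name string, args ...string) {
		cmd := exec.Command(name, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("%s %v: %v", name, args, err)
		}
	}

	func main() {
		dir, err := os.MkdirTemp("", "disk")
		if err != nil {
			log.Fatal(err)
		}
		defer os.RemoveAll(dir)

		raw := filepath.Join(dir, "disk.qcow2.raw")
		qcow := filepath.Join(dir, "disk.qcow2")

		// Seed a small empty raw image standing in for the boot2docker base disk.
		if err := os.WriteFile(raw, make([]byte, 1<<20), 0o644); err != nil {
			log.Fatal(err)
		}

		// Mirrors the two libmachine steps visible in the log.
		run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow)
		run("qemu-img", "resize", qcow, "+20000M")

		fmt.Println("created", qcow)
	}
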
TestRunningBinaryUpgrade (627.17s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3966415148 start -p running-upgrade-064000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3966415148 start -p running-upgrade-064000 --memory=2200 --vm-driver=qemu2 : (1m22.02218525s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-064000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0307 10:12:16.872794    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-064000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m25.681933417s)
-- stdout --
	* [running-upgrade-064000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-064000" primary control-plane node in "running-upgrade-064000" cluster
	* Updating the running qemu2 "running-upgrade-064000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	
-- /stdout --
** stderr ** 
	I0307 10:11:52.662487    4223 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:11:52.662638    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:11:52.662642    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:11:52.662644    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:11:52.662768    4223 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:11:52.663696    4223 out.go:298] Setting JSON to false
	I0307 10:11:52.680940    4223 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4284,"bootTime":1709830828,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 10:11:52.681015    4223 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:11:52.685798    4223 out.go:177] * [running-upgrade-064000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 10:11:52.692704    4223 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 10:11:52.695813    4223 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:11:52.692736    4223 notify.go:220] Checking for updates...
	I0307 10:11:52.701807    4223 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 10:11:52.704839    4223 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:11:52.707886    4223 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 10:11:52.710712    4223 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:11:52.714066    4223 config.go:182] Loaded profile config "running-upgrade-064000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 10:11:52.717793    4223 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0307 10:11:52.720803    4223 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:11:52.724797    4223 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 10:11:52.730682    4223 start.go:297] selected driver: qemu2
	I0307 10:11:52.730687    4223 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-064000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50305 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0307 10:11:52.730734    4223 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:11:52.733390    4223 cni.go:84] Creating CNI manager for ""
	I0307 10:11:52.733408    4223 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 10:11:52.733447    4223 start.go:340] cluster config:
	{Name:running-upgrade-064000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50305 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0307 10:11:52.733509    4223 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:11:52.740801    4223 out.go:177] * Starting "running-upgrade-064000" primary control-plane node in "running-upgrade-064000" cluster
	I0307 10:11:52.744813    4223 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0307 10:11:52.744831    4223 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0307 10:11:52.744840    4223 cache.go:56] Caching tarball of preloaded images
	I0307 10:11:52.744892    4223 preload.go:173] Found /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 10:11:52.744903    4223 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0307 10:11:52.744951    4223 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/running-upgrade-064000/config.json ...
	I0307 10:11:52.745427    4223 start.go:360] acquireMachinesLock for running-upgrade-064000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:11:52.745459    4223 start.go:364] duration metric: took 25.666µs to acquireMachinesLock for "running-upgrade-064000"
	I0307 10:11:52.745467    4223 start.go:96] Skipping create...Using existing machine configuration
	I0307 10:11:52.745471    4223 fix.go:54] fixHost starting: 
	I0307 10:11:52.746150    4223 fix.go:112] recreateIfNeeded on running-upgrade-064000: state=Running err=<nil>
	W0307 10:11:52.746161    4223 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 10:11:52.750875    4223 out.go:177] * Updating the running qemu2 "running-upgrade-064000" VM ...
	I0307 10:11:52.758743    4223 machine.go:94] provisionDockerMachine start ...
	I0307 10:11:52.758778    4223 main.go:141] libmachine: Using SSH client type: native
	I0307 10:11:52.758894    4223 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f69a30] 0x100f6c290 <nil>  [] 0s} localhost 50273 <nil> <nil>}
	I0307 10:11:52.758900    4223 main.go:141] libmachine: About to run SSH command:
	hostname
	I0307 10:11:52.818837    4223 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-064000
	
	I0307 10:11:52.818852    4223 buildroot.go:166] provisioning hostname "running-upgrade-064000"
	I0307 10:11:52.818896    4223 main.go:141] libmachine: Using SSH client type: native
	I0307 10:11:52.819003    4223 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f69a30] 0x100f6c290 <nil>  [] 0s} localhost 50273 <nil> <nil>}
	I0307 10:11:52.819009    4223 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-064000 && echo "running-upgrade-064000" | sudo tee /etc/hostname
	I0307 10:11:52.879212    4223 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-064000
	
	I0307 10:11:52.879258    4223 main.go:141] libmachine: Using SSH client type: native
	I0307 10:11:52.879355    4223 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f69a30] 0x100f6c290 <nil>  [] 0s} localhost 50273 <nil> <nil>}
	I0307 10:11:52.879362    4223 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-064000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-064000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-064000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 10:11:52.939971    4223 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 10:11:52.939984    4223 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18241-1349/.minikube CaCertPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18241-1349/.minikube}
	I0307 10:11:52.939996    4223 buildroot.go:174] setting up certificates
	I0307 10:11:52.940002    4223 provision.go:84] configureAuth start
	I0307 10:11:52.940006    4223 provision.go:143] copyHostCerts
	I0307 10:11:52.940072    4223 exec_runner.go:144] found /Users/jenkins/minikube-integration/18241-1349/.minikube/ca.pem, removing ...
	I0307 10:11:52.940079    4223 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18241-1349/.minikube/ca.pem
	I0307 10:11:52.940188    4223 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18241-1349/.minikube/ca.pem (1078 bytes)
	I0307 10:11:52.940365    4223 exec_runner.go:144] found /Users/jenkins/minikube-integration/18241-1349/.minikube/cert.pem, removing ...
	I0307 10:11:52.940368    4223 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18241-1349/.minikube/cert.pem
	I0307 10:11:52.940413    4223 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18241-1349/.minikube/cert.pem (1123 bytes)
	I0307 10:11:52.940520    4223 exec_runner.go:144] found /Users/jenkins/minikube-integration/18241-1349/.minikube/key.pem, removing ...
	I0307 10:11:52.940524    4223 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18241-1349/.minikube/key.pem
	I0307 10:11:52.940566    4223 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18241-1349/.minikube/key.pem (1679 bytes)
	I0307 10:11:52.940662    4223 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-064000 san=[127.0.0.1 localhost minikube running-upgrade-064000]
	I0307 10:11:52.978172    4223 provision.go:177] copyRemoteCerts
	I0307 10:11:52.978201    4223 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 10:11:52.978208    4223 sshutil.go:53] new ssh client: &{IP:localhost Port:50273 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/running-upgrade-064000/id_rsa Username:docker}
	I0307 10:11:53.013885    4223 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0307 10:11:53.020973    4223 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0307 10:11:53.027401    4223 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0307 10:11:53.034741    4223 provision.go:87] duration metric: took 94.737958ms to configureAuth
	I0307 10:11:53.034752    4223 buildroot.go:189] setting minikube options for container-runtime
	I0307 10:11:53.034857    4223 config.go:182] Loaded profile config "running-upgrade-064000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 10:11:53.034886    4223 main.go:141] libmachine: Using SSH client type: native
	I0307 10:11:53.034969    4223 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f69a30] 0x100f6c290 <nil>  [] 0s} localhost 50273 <nil> <nil>}
	I0307 10:11:53.034975    4223 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0307 10:11:53.096623    4223 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0307 10:11:53.096635    4223 buildroot.go:70] root file system type: tmpfs
	I0307 10:11:53.096685    4223 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0307 10:11:53.096738    4223 main.go:141] libmachine: Using SSH client type: native
	I0307 10:11:53.096843    4223 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f69a30] 0x100f6c290 <nil>  [] 0s} localhost 50273 <nil> <nil>}
	I0307 10:11:53.096878    4223 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0307 10:11:53.160648    4223 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0307 10:11:53.160694    4223 main.go:141] libmachine: Using SSH client type: native
	I0307 10:11:53.160815    4223 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f69a30] 0x100f6c290 <nil>  [] 0s} localhost 50273 <nil> <nil>}
	I0307 10:11:53.160826    4223 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0307 10:11:53.221492    4223 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 10:11:53.221502    4223 machine.go:97] duration metric: took 462.767791ms to provisionDockerMachine
	I0307 10:11:53.221507    4223 start.go:293] postStartSetup for "running-upgrade-064000" (driver="qemu2")
	I0307 10:11:53.221514    4223 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 10:11:53.221562    4223 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 10:11:53.221572    4223 sshutil.go:53] new ssh client: &{IP:localhost Port:50273 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/running-upgrade-064000/id_rsa Username:docker}
	I0307 10:11:53.253931    4223 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 10:11:53.255114    4223 info.go:137] Remote host: Buildroot 2021.02.12
	I0307 10:11:53.255122    4223 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18241-1349/.minikube/addons for local assets ...
	I0307 10:11:53.255197    4223 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18241-1349/.minikube/files for local assets ...
	I0307 10:11:53.255314    4223 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18241-1349/.minikube/files/etc/ssl/certs/17812.pem -> 17812.pem in /etc/ssl/certs
	I0307 10:11:53.255435    4223 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0307 10:11:53.258641    4223 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/files/etc/ssl/certs/17812.pem --> /etc/ssl/certs/17812.pem (1708 bytes)
	I0307 10:11:53.265587    4223 start.go:296] duration metric: took 44.075917ms for postStartSetup
	I0307 10:11:53.265603    4223 fix.go:56] duration metric: took 520.149583ms for fixHost
	I0307 10:11:53.265639    4223 main.go:141] libmachine: Using SSH client type: native
	I0307 10:11:53.265746    4223 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f69a30] 0x100f6c290 <nil>  [] 0s} localhost 50273 <nil> <nil>}
	I0307 10:11:53.265751    4223 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0307 10:11:53.328587    4223 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709835113.396245224
	
	I0307 10:11:53.328600    4223 fix.go:216] guest clock: 1709835113.396245224
	I0307 10:11:53.328604    4223 fix.go:229] Guest: 2024-03-07 10:11:53.396245224 -0800 PST Remote: 2024-03-07 10:11:53.265604 -0800 PST m=+0.625316126 (delta=130.641224ms)
	I0307 10:11:53.328617    4223 fix.go:200] guest clock delta is within tolerance: 130.641224ms
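The guest-clock check above runs `date +%s.%N` in the VM and compares the result against the host clock, skipping resync when the offset is small. A minimal sketch of that comparison, assuming the raw seconds.nanoseconds string as input (the 2s tolerance is an assumed threshold, not minikube's constant):

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns its
// offset from the local clock.
func clockDelta(guest string) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guest, 64)
	if err != nil {
		return 0, err
	}
	guestTime := time.Unix(0, int64(secs*float64(time.Second)))
	return time.Since(guestTime), nil
}

func main() {
	const tolerance = 2 * time.Second // assumed threshold for illustration
	d, err := clockDelta("1709835113.396245224")
	if err != nil {
		panic(err)
	}
	if math.Abs(float64(d)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", d)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; would resync\n", d)
	}
}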
	I0307 10:11:53.328622    4223 start.go:83] releasing machines lock for "running-upgrade-064000", held for 583.178458ms
	I0307 10:11:53.328704    4223 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 10:11:53.328705    4223 ssh_runner.go:195] Run: cat /version.json
	I0307 10:11:53.328727    4223 sshutil.go:53] new ssh client: &{IP:localhost Port:50273 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/running-upgrade-064000/id_rsa Username:docker}
	I0307 10:11:53.328727    4223 sshutil.go:53] new ssh client: &{IP:localhost Port:50273 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/running-upgrade-064000/id_rsa Username:docker}
	W0307 10:11:53.329276    4223 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50273: connect: connection refused
	I0307 10:11:53.329296    4223 retry.go:31] will retry after 277.926733ms: dial tcp [::1]:50273: connect: connection refused
	W0307 10:11:53.650489    4223 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0307 10:11:53.650618    4223 ssh_runner.go:195] Run: systemctl --version
	I0307 10:11:53.653696    4223 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0307 10:11:53.656115    4223 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0307 10:11:53.656170    4223 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0307 10:11:53.660523    4223 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0307 10:11:53.666652    4223 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
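The two find/sed one-liners above textually force every "subnet" in the bridge and podman CNI configs to the pod CIDR 10.244.0.0/16. The same rewrite is easier to see as a structured JSON transformation; a sketch under the assumption of a typical conflist layout (the sample config below is illustrative):

package main

import (
	"encoding/json"
	"fmt"
)

// setSubnets recursively rewrites every "subnet" value in a decoded CNI
// config, mirroring what the sed one-liners do textually.
func setSubnets(v any, subnet string) {
	switch t := v.(type) {
	case map[string]any:
		for k, val := range t {
			if k == "subnet" {
				t[k] = subnet
				continue
			}
			setSubnets(val, subnet)
		}
	case []any:
		for _, val := range t {
			setSubnets(val, subnet)
		}
	}
}

func main() {
	raw := []byte(`{"cniVersion":"0.4.0","plugins":[{"type":"bridge","ipam":{"ranges":[[{"subnet":"10.88.0.0/16"}]]}}]}`)
	var conf map[string]any
	if err := json.Unmarshal(raw, &conf); err != nil {
		panic(err)
	}
	setSubnets(conf, "10.244.0.0/16")
	out, _ := json.Marshal(conf)
	fmt.Println(string(out))
}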
	I0307 10:11:53.666665    4223 start.go:494] detecting cgroup driver to use...
	I0307 10:11:53.666794    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 10:11:53.673184    4223 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0307 10:11:53.676818    4223 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 10:11:53.679973    4223 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 10:11:53.680000    4223 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 10:11:53.683207    4223 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 10:11:53.686580    4223 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 10:11:53.689538    4223 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 10:11:53.692446    4223 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 10:11:53.695296    4223 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 10:11:53.698035    4223 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 10:11:53.700890    4223 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 10:11:53.703766    4223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:11:53.800871    4223 ssh_runner.go:195] Run: sudo systemctl restart containerd
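Each sed call in the run above rewrites one key of /etc/containerd/config.toml in place (sandbox_image, SystemdCgroup, runtime type, conf_dir) before the daemon-reload and restart. Two of those edits expressed as Go regexp replacements, with an inline sample config standing in for the real file:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	config := `[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "k8s.gcr.io/pause:3.6"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
`
	// Same indentation-preserving rewrites as the sed -r expressions above.
	rules := []struct{ pat, repl string }{
		{`(?m)^(\s*)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.7"`},
		{`(?m)^(\s*)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
	}
	for _, r := range rules {
		config = regexp.MustCompile(r.pat).ReplaceAllString(config, r.repl)
	}
	fmt.Print(config)
}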
	I0307 10:11:53.808826    4223 start.go:494] detecting cgroup driver to use...
	I0307 10:11:53.808907    4223 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0307 10:11:53.814095    4223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 10:11:53.819401    4223 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0307 10:11:53.824647    4223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 10:11:53.829284    4223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 10:11:53.833833    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 10:11:53.839324    4223 ssh_runner.go:195] Run: which cri-dockerd
	I0307 10:11:53.840718    4223 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0307 10:11:53.843193    4223 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0307 10:11:53.848533    4223 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0307 10:11:53.944325    4223 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0307 10:11:54.032908    4223 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0307 10:11:54.032987    4223 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0307 10:11:54.038103    4223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:11:54.130038    4223 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 10:11:55.516494    4223 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.386487042s)
	I0307 10:11:55.516554    4223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0307 10:11:55.521010    4223 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0307 10:11:55.527201    4223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 10:11:55.532556    4223 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0307 10:11:55.630235    4223 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 10:11:55.709636    4223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:11:55.795853    4223 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0307 10:11:55.802115    4223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 10:11:55.806356    4223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:11:55.877269    4223 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0307 10:11:55.918542    4223 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0307 10:11:55.918634    4223 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0307 10:11:55.920946    4223 start.go:562] Will wait 60s for crictl version
	I0307 10:11:55.920996    4223 ssh_runner.go:195] Run: which crictl
	I0307 10:11:55.922329    4223 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 10:11:55.933992    4223 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0307 10:11:55.934056    4223 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 10:11:55.949194    4223 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 10:11:55.972904    4223 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0307 10:11:55.973013    4223 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
I0307 10:11:55.974424    4223 kubeadm.go:877] updating cluster {Name:running-upgrade-064000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50305 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0307 10:11:55.974464    4223 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0307 10:11:55.974504    4223 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 10:11:55.984857    4223 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0307 10:11:55.984865    4223 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0307 10:11:55.984911    4223 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0307 10:11:55.987851    4223 ssh_runner.go:195] Run: which lz4
	I0307 10:11:55.989196    4223 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0307 10:11:55.990444    4223 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0307 10:11:55.990456    4223 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0307 10:11:56.763149    4223 docker.go:649] duration metric: took 774.008667ms to copy over tarball
	I0307 10:11:56.763209    4223 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0307 10:11:57.832482    4223 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.069294917s)
	I0307 10:11:57.832496    4223 ssh_runner.go:146] rm: /preloaded.tar.lz4
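The preload step above copies the images-plus-docker-state tarball into the guest and unpacks it with tar's -I lz4 decompression filter. A sketch of the same invocation driven from Go (tar and lz4 on PATH are assumed; the sudo prefix from the log is elided):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks an lz4-compressed preload tarball into dir,
// preserving the security.capability xattrs exactly as the runner does.
func extractPreload(tarball, dir string) error {
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dir, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Fprintln(os.Stderr, "extract:", err)
	}
}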
	I0307 10:11:57.847935    4223 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0307 10:11:57.851055    4223 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0307 10:11:57.856101    4223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:11:57.926480    4223 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 10:11:58.164253    4223 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 10:11:58.175285    4223 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0307 10:11:58.175297    4223 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0307 10:11:58.175302    4223 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0307 10:11:58.181230    4223 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0307 10:11:58.181282    4223 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0307 10:11:58.182008    4223 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0307 10:11:58.182066    4223 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 10:11:58.182133    4223 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 10:11:58.182191    4223 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0307 10:11:58.182237    4223 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0307 10:11:58.182296    4223 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0307 10:11:58.189767    4223 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0307 10:11:58.189844    4223 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0307 10:11:58.189898    4223 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0307 10:11:58.189970    4223 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 10:11:58.190240    4223 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0307 10:11:58.190585    4223 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0307 10:11:58.190582    4223 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0307 10:11:58.190698    4223 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 10:12:00.340500    4223 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 10:12:00.378793    4223 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0307 10:12:00.378847    4223 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 10:12:00.378947    4223 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 10:12:00.389512    4223 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0307 10:12:00.405333    4223 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0307 10:12:00.412423    4223 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0307 10:12:00.412443    4223 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0307 10:12:00.412500    4223 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	W0307 10:12:00.419902    4223 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0307 10:12:00.420040    4223 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0307 10:12:00.424606    4223 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0307 10:12:00.428142    4223 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0307 10:12:00.440220    4223 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0307 10:12:00.440242    4223 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0307 10:12:00.440304    4223 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0307 10:12:00.441656    4223 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0307 10:12:00.441668    4223 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0307 10:12:00.441697    4223 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0307 10:12:00.452722    4223 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0307 10:12:00.456274    4223 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0307 10:12:00.456374    4223 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0307 10:12:00.456606    4223 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0307 10:12:00.456653    4223 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0307 10:12:00.457955    4223 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0307 10:12:00.459531    4223 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0307 10:12:00.466503    4223 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0307 10:12:00.466503    4223 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0307 10:12:00.466530    4223 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0307 10:12:00.466534    4223 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0307 10:12:00.466581    4223 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0307 10:12:00.466596    4223 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0307 10:12:00.466640    4223 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0307 10:12:00.484091    4223 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0307 10:12:00.484110    4223 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0307 10:12:00.484168    4223 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0307 10:12:00.498196    4223 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0307 10:12:00.498194    4223 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0307 10:12:00.498226    4223 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0307 10:12:00.498278    4223 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0307 10:12:00.514783    4223 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0307 10:12:00.514815    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0307 10:12:00.515642    4223 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0307 10:12:00.537215    4223 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0307 10:12:00.561747    4223 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0307 10:12:00.561769    4223 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0307 10:12:00.561775    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0307 10:12:00.602433    4223 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
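Each cached image above is streamed into the daemon with `sudo cat <file> | docker load`. Built in-process, the same pipeline is just a file handle wired to docker's stdin; a minimal sketch (path is one from the log, sudo elided):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// dockerLoad streams a saved image tarball into the docker daemon,
// the in-process equivalent of `cat <file> | docker load`.
func dockerLoad(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	cmd := exec.Command("docker", "load")
	cmd.Stdin = f
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker load: %v: %s", err, out)
	}
	fmt.Print(string(out))
	return nil
}

func main() {
	if err := dockerLoad("/var/lib/minikube/images/coredns_v1.8.6"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}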
	W0307 10:12:01.062721    4223 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0307 10:12:01.063262    4223 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 10:12:01.103390    4223 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0307 10:12:01.103437    4223 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 10:12:01.103536    4223 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 10:12:02.056947    4223 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0307 10:12:02.057425    4223 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0307 10:12:02.062602    4223 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0307 10:12:02.062689    4223 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0307 10:12:02.116423    4223 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0307 10:12:02.116439    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0307 10:12:02.358638    4223 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0307 10:12:02.358686    4223 cache_images.go:92] duration metric: took 4.183515s to LoadCachedImages
	W0307 10:12:02.358729    4223 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	I0307 10:12:02.358735    4223 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0307 10:12:02.358786    4223 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-064000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0307 10:12:02.358854    4223 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0307 10:12:02.372318    4223 cni.go:84] Creating CNI manager for ""
	I0307 10:12:02.372328    4223 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 10:12:02.372332    4223 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0307 10:12:02.372341    4223 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-064000 NodeName:running-upgrade-064000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0307 10:12:02.372422    4223 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-064000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
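
The kubeadm config above is rendered from the options struct logged at kubeadm.go:181. A toy sketch of that render step using Go's text/template, covering just the nodeRegistration stanza (the nodeOpts struct and template string below are illustrative, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// nodeOpts holds the few fields the sample stanza needs.
type nodeOpts struct {
	CRISocket, NodeName, NodeIP string
}

const stanza = `nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	t := template.Must(template.New("node").Parse(stanza))
	opts := nodeOpts{
		CRISocket: "unix:///var/run/cri-dockerd.sock",
		NodeName:  "running-upgrade-064000",
		NodeIP:    "10.0.2.15",
	}
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}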
	
	I0307 10:12:02.372485    4223 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0307 10:12:02.375820    4223 binaries.go:44] Found k8s binaries, skipping transfer
	I0307 10:12:02.375849    4223 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0307 10:12:02.378597    4223 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0307 10:12:02.383469    4223 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 10:12:02.388556    4223 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0307 10:12:02.394278    4223 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0307 10:12:02.395554    4223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:12:02.482656    4223 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 10:12:02.487347    4223 certs.go:68] Setting up /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/running-upgrade-064000 for IP: 10.0.2.15
	I0307 10:12:02.487353    4223 certs.go:194] generating shared ca certs ...
	I0307 10:12:02.487363    4223 certs.go:226] acquiring lock for ca certs: {Name:mkc8d76d77d4efc8795fd6159d984855be90a666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:12:02.487554    4223 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18241-1349/.minikube/ca.key
	I0307 10:12:02.487602    4223 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18241-1349/.minikube/proxy-client-ca.key
	I0307 10:12:02.487610    4223 certs.go:256] generating profile certs ...
	I0307 10:12:02.487666    4223 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/running-upgrade-064000/client.key
	I0307 10:12:02.487677    4223 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/running-upgrade-064000/apiserver.key.49b55af9
	I0307 10:12:02.487685    4223 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/running-upgrade-064000/apiserver.crt.49b55af9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0307 10:12:02.657757    4223 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/running-upgrade-064000/apiserver.crt.49b55af9 ...
	I0307 10:12:02.657768    4223 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/running-upgrade-064000/apiserver.crt.49b55af9: {Name:mk617cbcb8554ee5bf1bfa124e78c07f865702c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:12:02.658123    4223 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/running-upgrade-064000/apiserver.key.49b55af9 ...
	I0307 10:12:02.658128    4223 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/running-upgrade-064000/apiserver.key.49b55af9: {Name:mk866b434e5ffa38cde43b475def8128ff5ab8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:12:02.658255    4223 certs.go:381] copying /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/running-upgrade-064000/apiserver.crt.49b55af9 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/running-upgrade-064000/apiserver.crt
	I0307 10:12:02.658397    4223 certs.go:385] copying /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/running-upgrade-064000/apiserver.key.49b55af9 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/running-upgrade-064000/apiserver.key
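The apiserver certificate above is regenerated with exactly the four IP SANs listed (service VIP, loopback, and node IPs). A minimal crypto/x509 sketch of issuing such a cert; it is self-signed here for brevity, whereas the real cert is signed by minikubeCA:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // the SANs from the log line above
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
		},
	}
	// Self-signed (template doubles as parent); minikube signs with its CA key.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}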
	I0307 10:12:02.658554    4223 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/running-upgrade-064000/proxy-client.key
	I0307 10:12:02.658692    4223 certs.go:484] found cert: /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/1781.pem (1338 bytes)
	W0307 10:12:02.658724    4223 certs.go:480] ignoring /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/1781_empty.pem, impossibly tiny 0 bytes
	I0307 10:12:02.658730    4223 certs.go:484] found cert: /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca-key.pem (1675 bytes)
	I0307 10:12:02.658747    4223 certs.go:484] found cert: /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem (1078 bytes)
	I0307 10:12:02.658764    4223 certs.go:484] found cert: /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem (1123 bytes)
	I0307 10:12:02.658780    4223 certs.go:484] found cert: /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/key.pem (1679 bytes)
	I0307 10:12:02.658837    4223 certs.go:484] found cert: /Users/jenkins/minikube-integration/18241-1349/.minikube/files/etc/ssl/certs/17812.pem (1708 bytes)
	I0307 10:12:02.659156    4223 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 10:12:02.666770    4223 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0307 10:12:02.674214    4223 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 10:12:02.680826    4223 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0307 10:12:02.687654    4223 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/running-upgrade-064000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0307 10:12:02.694254    4223 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/running-upgrade-064000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0307 10:12:02.701091    4223 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/running-upgrade-064000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0307 10:12:02.707992    4223 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/running-upgrade-064000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0307 10:12:02.714948    4223 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/1781.pem --> /usr/share/ca-certificates/1781.pem (1338 bytes)
	I0307 10:12:02.722097    4223 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/files/etc/ssl/certs/17812.pem --> /usr/share/ca-certificates/17812.pem (1708 bytes)
	I0307 10:12:02.729221    4223 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 10:12:02.735596    4223 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0307 10:12:02.740476    4223 ssh_runner.go:195] Run: openssl version
	I0307 10:12:02.742304    4223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 10:12:02.745856    4223 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 10:12:02.747319    4223 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 17:31 /usr/share/ca-certificates/minikubeCA.pem
	I0307 10:12:02.747337    4223 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 10:12:02.749154    4223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0307 10:12:02.751895    4223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1781.pem && ln -fs /usr/share/ca-certificates/1781.pem /etc/ssl/certs/1781.pem"
	I0307 10:12:02.754992    4223 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1781.pem
	I0307 10:12:02.756547    4223 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  7 17:37 /usr/share/ca-certificates/1781.pem
	I0307 10:12:02.756571    4223 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1781.pem
	I0307 10:12:02.758342    4223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1781.pem /etc/ssl/certs/51391683.0"
	I0307 10:12:02.761329    4223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17812.pem && ln -fs /usr/share/ca-certificates/17812.pem /etc/ssl/certs/17812.pem"
	I0307 10:12:02.764295    4223 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17812.pem
	I0307 10:12:02.765705    4223 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  7 17:37 /usr/share/ca-certificates/17812.pem
	I0307 10:12:02.765734    4223 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17812.pem
	I0307 10:12:02.767538    4223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17812.pem /etc/ssl/certs/3ec20f2e.0"
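Each ln -fs above publishes a CA under /etc/ssl/certs/<subject-hash>.0, the name OpenSSL's lookup-by-hash expects (the hash values b5213941, 51391683, 3ec20f2e come from `openssl x509 -hash`). A sketch that computes the hash the same way, by shelling out to openssl, and creates the link (openssl on PATH assumed):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkByHash creates the <certsDir>/<hash>.0 symlink that OpenSSL's
// subject-hash directory lookup expects for the given certificate.
func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := certsDir + "/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // replace any stale link, like ln -f
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}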
	I0307 10:12:02.770352    4223 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0307 10:12:02.771919    4223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0307 10:12:02.773564    4223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0307 10:12:02.775387    4223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0307 10:12:02.777259    4223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0307 10:12:02.779720    4223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0307 10:12:02.781424    4223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
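The `-checkend 86400` runs above ask whether each certificate expires within the next 24 hours, which decides whether certs are regenerated. The equivalent check in Go, parsing the PEM directly instead of shelling out:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file
// expires within d -- the Go equivalent of `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}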
I0307 10:12:02.783171    4223 kubeadm.go:391] StartCluster: {Name:running-upgrade-064000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50305 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0307 10:12:02.783236    4223 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0307 10:12:02.793242    4223 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0307 10:12:02.796569    4223 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0307 10:12:02.796575    4223 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0307 10:12:02.796578    4223 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0307 10:12:02.796603    4223 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0307 10:12:02.799610    4223 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:12:02.799836    4223 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-064000" does not appear in /Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:12:02.799881    4223 kubeconfig.go:62] /Users/jenkins/minikube-integration/18241-1349/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-064000" cluster setting kubeconfig missing "running-upgrade-064000" context setting]
	I0307 10:12:02.800021    4223 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/kubeconfig: {Name:mkeef9e7922e618c2ac8219607b646aeaf5f61cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0307 10:12:02.800434    4223 kapi.go:59] client config for running-upgrade-064000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/running-upgrade-064000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/running-upgrade-064000/client.key", CAFile:"/Users/jenkins/minikube-integration/18241-1349/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10225f6a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 10:12:02.800738    4223 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0307 10:12:02.803556    4223 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-064000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0307 10:12:02.803561    4223 kubeadm.go:1153] stopping kube-system containers ...
	I0307 10:12:02.803598    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0307 10:12:02.814383    4223 docker.go:483] Stopping containers: [a18cd386d87a 75601c615cf1 1318991ecd60 a647ec10ae86 60e5f0621b8c 663741aa7a37 9378fea4f127 78c1d8b7fef3 3b5187de19fa f75b1d96ecb4 178fabc62816 30a49a9d0f71]
	I0307 10:12:02.814451    4223 ssh_runner.go:195] Run: docker stop a18cd386d87a 75601c615cf1 1318991ecd60 a647ec10ae86 60e5f0621b8c 663741aa7a37 9378fea4f127 78c1d8b7fef3 3b5187de19fa f75b1d96ecb4 178fabc62816 30a49a9d0f71
	I0307 10:12:02.825896    4223 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0307 10:12:02.929568    4223 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 10:12:02.934027    4223 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5643 Mar  7 18:11 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Mar  7 18:11 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Mar  7 18:11 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Mar  7 18:11 /etc/kubernetes/scheduler.conf
	
	I0307 10:12:02.934073    4223 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/admin.conf
	I0307 10:12:02.938444    4223 kubeadm.go:162] "https://control-plane.minikube.internal:50305" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:12:02.938480    4223 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0307 10:12:02.941899    4223 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/kubelet.conf
	I0307 10:12:02.945399    4223 kubeadm.go:162] "https://control-plane.minikube.internal:50305" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:12:02.945425    4223 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0307 10:12:02.948902    4223 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/controller-manager.conf
	I0307 10:12:02.952015    4223 kubeadm.go:162] "https://control-plane.minikube.internal:50305" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:12:02.952039    4223 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0307 10:12:02.954840    4223 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/scheduler.conf
	I0307 10:12:02.957389    4223 kubeadm.go:162] "https://control-plane.minikube.internal:50305" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:12:02.957411    4223 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0307 10:12:02.960517    4223 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 10:12:02.963298    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 10:12:02.983703    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 10:12:03.315048    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0307 10:12:03.525271    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 10:12:03.551226    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0307 10:12:03.570882    4223 api_server.go:52] waiting for apiserver process to appear ...
	I0307 10:12:03.570961    4223 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 10:12:04.072991    4223 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 10:12:04.573002    4223 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 10:12:04.577644    4223 api_server.go:72] duration metric: took 1.006795416s to wait for apiserver process to appear ...
	I0307 10:12:04.577656    4223 api_server.go:88] waiting for apiserver healthz status ...
	I0307 10:12:04.577665    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:12:09.579690    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:12:09.579739    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:12:14.580037    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:12:14.580105    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:12:19.580636    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:12:19.580661    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:12:24.581252    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:12:24.581326    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:12:29.582580    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:12:29.582664    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:12:34.584163    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:12:34.584296    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:12:39.586432    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:12:39.586519    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:12:44.589033    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:12:44.589116    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:12:49.591803    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:12:49.591877    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:12:54.593835    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:12:54.593878    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:12:59.596161    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:12:59.596237    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:13:04.598540    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
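Editor's note: each Checking/stopped pair above is exactly five seconds apart, which matches an HTTP client timeout of about 5s ("Client.Timeout exceeded while awaiting headers"). A sketch of a single probe; skipping certificate verification for the apiserver's self-signed cert is an assumption here, not something the log confirms:

    package sketch

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz issues one GET against the apiserver healthz
    // endpoint with a 5s overall timeout, matching the cadence above.
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // assumption
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. "context deadline exceeded (Client.Timeout exceeded ...)"
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if string(body) != "ok" {
            return fmt.Errorf("healthz returned %q (status %d)", body, resp.StatusCode)
        }
        return nil
    }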
	I0307 10:13:04.598786    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:13:04.621290    4223 logs.go:276] 2 containers: [912e171f7628 3b5187de19fa]
	I0307 10:13:04.621409    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:13:04.637066    4223 logs.go:276] 2 containers: [e7399b2ae704 9378fea4f127]
	I0307 10:13:04.637156    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:13:04.653771    4223 logs.go:276] 1 containers: [acb346c0d2d8]
	I0307 10:13:04.653853    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:13:04.668151    4223 logs.go:276] 2 containers: [e50116dd9958 60e5f0621b8c]
	I0307 10:13:04.668235    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:13:04.678573    4223 logs.go:276] 1 containers: [438e893703a5]
	I0307 10:13:04.678656    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:13:04.688938    4223 logs.go:276] 2 containers: [7b2b4d88e84b 78c1d8b7fef3]
	I0307 10:13:04.689014    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:13:04.699009    4223 logs.go:276] 0 containers: []
	W0307 10:13:04.699025    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:13:04.699095    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:13:04.716777    4223 logs.go:276] 2 containers: [e8e2f52b53d1 e7d817fc8222]
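Editor's note: once the healthz probe gives up, the runner enumerates control-plane containers one component at a time with docker ps -a --filter=name=k8s_<component>. The "k8s_" prefix is how the Docker CRI names Kubernetes-managed containers; the two IDs per component are presumably a current and an exited instance, since "ps -a" includes stopped containers. A sketch of one discovery call:

    package sketch

    import (
        "os/exec"
        "strings"
    )

    // containerIDs lists all container IDs (running or exited) whose
    // name carries the k8s_<component> prefix.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }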
	I0307 10:13:04.716808    4223 logs.go:123] Gathering logs for coredns [acb346c0d2d8] ...
	I0307 10:13:04.716813    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb346c0d2d8"
	I0307 10:13:04.727958    4223 logs.go:123] Gathering logs for kube-scheduler [e50116dd9958] ...
	I0307 10:13:04.727969    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e50116dd9958"
	I0307 10:13:04.739843    4223 logs.go:123] Gathering logs for kube-scheduler [60e5f0621b8c] ...
	I0307 10:13:04.739854    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e5f0621b8c"
	I0307 10:13:04.756235    4223 logs.go:123] Gathering logs for kube-controller-manager [7b2b4d88e84b] ...
	I0307 10:13:04.756248    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2b4d88e84b"
	I0307 10:13:04.775597    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:13:04.775612    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:13:04.811645    4223 logs.go:123] Gathering logs for kube-apiserver [3b5187de19fa] ...
	I0307 10:13:04.811654    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b5187de19fa"
	I0307 10:13:04.831224    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:13:04.831234    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:13:04.938423    4223 logs.go:123] Gathering logs for kube-apiserver [912e171f7628] ...
	I0307 10:13:04.938441    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912e171f7628"
	I0307 10:13:04.951613    4223 logs.go:123] Gathering logs for kube-proxy [438e893703a5] ...
	I0307 10:13:04.951625    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 438e893703a5"
	I0307 10:13:04.963392    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:13:04.963404    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:13:04.989945    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:13:04.989957    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:13:05.002931    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:13:05.002951    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:13:05.007202    4223 logs.go:123] Gathering logs for etcd [9378fea4f127] ...
	I0307 10:13:05.007210    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9378fea4f127"
	I0307 10:13:05.021710    4223 logs.go:123] Gathering logs for kube-controller-manager [78c1d8b7fef3] ...
	I0307 10:13:05.021721    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c1d8b7fef3"
	I0307 10:13:05.033415    4223 logs.go:123] Gathering logs for storage-provisioner [e8e2f52b53d1] ...
	I0307 10:13:05.033426    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8e2f52b53d1"
	I0307 10:13:05.044487    4223 logs.go:123] Gathering logs for storage-provisioner [e7d817fc8222] ...
	I0307 10:13:05.044498    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7d817fc8222"
	I0307 10:13:05.055570    4223 logs.go:123] Gathering logs for etcd [e7399b2ae704] ...
	I0307 10:13:05.055581    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7399b2ae704"
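Editor's note: that completes one full gathering pass — every discovered container is tailed for its last 400 log lines via "docker logs --tail 400 <id>", alongside the host-level sources. A sketch of the per-container part, with deliberately simplified error handling:

    package sketch

    import (
        "fmt"
        "os/exec"
    )

    // gatherLogs tails the last 400 lines of every discovered
    // container, keyed by component name and container ID.
    func gatherLogs(containers map[string][]string) map[string]string {
        logs := make(map[string]string)
        for component, ids := range containers {
            for _, id := range ids {
                out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                if err != nil {
                    out = []byte(fmt.Sprintf("error: %v", err))
                }
                logs[component+" ["+id+"]"] = string(out)
            }
        }
        return logs
    }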
	I0307 10:13:07.571232    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:13:12.573864    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:13:12.574266    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:13:12.618806    4223 logs.go:276] 2 containers: [912e171f7628 3b5187de19fa]
	I0307 10:13:12.618961    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:13:12.636374    4223 logs.go:276] 2 containers: [e7399b2ae704 9378fea4f127]
	I0307 10:13:12.636463    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:13:12.653201    4223 logs.go:276] 1 containers: [acb346c0d2d8]
	I0307 10:13:12.653282    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:13:12.664130    4223 logs.go:276] 2 containers: [e50116dd9958 60e5f0621b8c]
	I0307 10:13:12.664748    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:13:12.675473    4223 logs.go:276] 1 containers: [438e893703a5]
	I0307 10:13:12.675554    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:13:12.685878    4223 logs.go:276] 2 containers: [7b2b4d88e84b 78c1d8b7fef3]
	I0307 10:13:12.685945    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:13:12.696193    4223 logs.go:276] 0 containers: []
	W0307 10:13:12.696204    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:13:12.696254    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:13:12.706956    4223 logs.go:276] 2 containers: [e8e2f52b53d1 e7d817fc8222]
	I0307 10:13:12.706976    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:13:12.706982    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:13:12.742746    4223 logs.go:123] Gathering logs for kube-apiserver [912e171f7628] ...
	I0307 10:13:12.742760    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912e171f7628"
	I0307 10:13:12.757108    4223 logs.go:123] Gathering logs for kube-scheduler [e50116dd9958] ...
	I0307 10:13:12.757118    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e50116dd9958"
	I0307 10:13:12.768996    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:13:12.769011    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:13:12.780892    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:13:12.780903    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:13:12.818329    4223 logs.go:123] Gathering logs for kube-apiserver [3b5187de19fa] ...
	I0307 10:13:12.818337    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b5187de19fa"
	I0307 10:13:12.837159    4223 logs.go:123] Gathering logs for etcd [e7399b2ae704] ...
	I0307 10:13:12.837170    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7399b2ae704"
	I0307 10:13:12.851207    4223 logs.go:123] Gathering logs for coredns [acb346c0d2d8] ...
	I0307 10:13:12.851217    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb346c0d2d8"
	I0307 10:13:12.861980    4223 logs.go:123] Gathering logs for kube-scheduler [60e5f0621b8c] ...
	I0307 10:13:12.862001    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e5f0621b8c"
	I0307 10:13:12.877409    4223 logs.go:123] Gathering logs for storage-provisioner [e7d817fc8222] ...
	I0307 10:13:12.877420    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7d817fc8222"
	I0307 10:13:12.888711    4223 logs.go:123] Gathering logs for etcd [9378fea4f127] ...
	I0307 10:13:12.888721    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9378fea4f127"
	I0307 10:13:12.903799    4223 logs.go:123] Gathering logs for kube-proxy [438e893703a5] ...
	I0307 10:13:12.903810    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 438e893703a5"
	I0307 10:13:12.917677    4223 logs.go:123] Gathering logs for kube-controller-manager [7b2b4d88e84b] ...
	I0307 10:13:12.917688    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2b4d88e84b"
	I0307 10:13:12.935214    4223 logs.go:123] Gathering logs for kube-controller-manager [78c1d8b7fef3] ...
	I0307 10:13:12.935229    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c1d8b7fef3"
	I0307 10:13:12.949911    4223 logs.go:123] Gathering logs for storage-provisioner [e8e2f52b53d1] ...
	I0307 10:13:12.949921    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8e2f52b53d1"
	I0307 10:13:12.961202    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:13:12.961213    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:13:12.986219    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:13:12.986227    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
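Editor's note: the cycle then repeats — roughly 2.5 seconds after a gathering pass finishes, the next healthz probe fires, fails after its 5s timeout, and triggers another pass. The outer loop is essentially the following, where check and gather stand for the sketches above and the timings are read off the log:

    package sketch

    import "time"

    // waitHealthy retries the health probe until it succeeds or the
    // deadline passes, collecting diagnostics after each failure.
    func waitHealthy(check func() error, gather func(), deadline time.Time) bool {
        for time.Now().Before(deadline) {
            if check() == nil {
                return true
            }
            gather()
            time.Sleep(2500 * time.Millisecond)
        }
        return false
    }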
	I0307 10:13:15.491092    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:13:20.493485    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:13:20.493578    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:13:20.504627    4223 logs.go:276] 2 containers: [912e171f7628 3b5187de19fa]
	I0307 10:13:20.504679    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:13:20.515122    4223 logs.go:276] 2 containers: [e7399b2ae704 9378fea4f127]
	I0307 10:13:20.515189    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:13:20.526072    4223 logs.go:276] 1 containers: [acb346c0d2d8]
	I0307 10:13:20.526149    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:13:20.536612    4223 logs.go:276] 2 containers: [e50116dd9958 60e5f0621b8c]
	I0307 10:13:20.536687    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:13:20.546738    4223 logs.go:276] 1 containers: [438e893703a5]
	I0307 10:13:20.546811    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:13:20.557489    4223 logs.go:276] 2 containers: [7b2b4d88e84b 78c1d8b7fef3]
	I0307 10:13:20.557557    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:13:20.567605    4223 logs.go:276] 0 containers: []
	W0307 10:13:20.567617    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:13:20.567670    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:13:20.578115    4223 logs.go:276] 2 containers: [e8e2f52b53d1 e7d817fc8222]
	I0307 10:13:20.578137    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:13:20.578142    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:13:20.615890    4223 logs.go:123] Gathering logs for kube-apiserver [3b5187de19fa] ...
	I0307 10:13:20.615902    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b5187de19fa"
	I0307 10:13:20.634528    4223 logs.go:123] Gathering logs for coredns [acb346c0d2d8] ...
	I0307 10:13:20.634538    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb346c0d2d8"
	I0307 10:13:20.645693    4223 logs.go:123] Gathering logs for storage-provisioner [e7d817fc8222] ...
	I0307 10:13:20.645703    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7d817fc8222"
	I0307 10:13:20.656873    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:13:20.656888    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:13:20.668951    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:13:20.668965    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:13:20.673896    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:13:20.673902    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:13:20.710732    4223 logs.go:123] Gathering logs for etcd [9378fea4f127] ...
	I0307 10:13:20.710741    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9378fea4f127"
	I0307 10:13:20.726771    4223 logs.go:123] Gathering logs for kube-scheduler [e50116dd9958] ...
	I0307 10:13:20.726781    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e50116dd9958"
	I0307 10:13:20.738571    4223 logs.go:123] Gathering logs for kube-proxy [438e893703a5] ...
	I0307 10:13:20.738583    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 438e893703a5"
	I0307 10:13:20.749986    4223 logs.go:123] Gathering logs for storage-provisioner [e8e2f52b53d1] ...
	I0307 10:13:20.749998    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8e2f52b53d1"
	I0307 10:13:20.761908    4223 logs.go:123] Gathering logs for kube-apiserver [912e171f7628] ...
	I0307 10:13:20.761918    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912e171f7628"
	I0307 10:13:20.775117    4223 logs.go:123] Gathering logs for kube-scheduler [60e5f0621b8c] ...
	I0307 10:13:20.775129    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e5f0621b8c"
	I0307 10:13:20.790410    4223 logs.go:123] Gathering logs for etcd [e7399b2ae704] ...
	I0307 10:13:20.790422    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7399b2ae704"
	I0307 10:13:20.804013    4223 logs.go:123] Gathering logs for kube-controller-manager [7b2b4d88e84b] ...
	I0307 10:13:20.804026    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2b4d88e84b"
	I0307 10:13:20.820813    4223 logs.go:123] Gathering logs for kube-controller-manager [78c1d8b7fef3] ...
	I0307 10:13:20.820823    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c1d8b7fef3"
	I0307 10:13:20.832292    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:13:20.832302    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
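Editor's note: besides container logs, each pass collects host-level sources — the kubelet and docker/cri-docker units via journalctl, kernel warnings via dmesg, and "kubectl describe nodes" against the local kubeconfig. The equivalent commands, wrapped the same way the runner wraps them (output handling is illustrative):

    package sketch

    import (
        "fmt"
        "os/exec"
    )

    // hostLogs runs the host-side collection commands from the log.
    func hostLogs() {
        cmds := map[string]string{
            "kubelet":        "sudo journalctl -u kubelet -n 400",
            "Docker":         "sudo journalctl -u docker -u cri-docker -n 400",
            "dmesg":          "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "describe nodes": "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
        }
        for name, cmd := range cmds {
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                out = append(out, []byte(fmt.Sprintf("\n(collection error: %v)", err))...)
            }
            fmt.Printf("==> %s <==\n%s\n", name, out)
        }
    }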
	I0307 10:13:23.360504    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:13:28.363283    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:13:28.363647    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:13:28.401367    4223 logs.go:276] 2 containers: [912e171f7628 3b5187de19fa]
	I0307 10:13:28.401494    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:13:28.421158    4223 logs.go:276] 2 containers: [e7399b2ae704 9378fea4f127]
	I0307 10:13:28.421265    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:13:28.435333    4223 logs.go:276] 1 containers: [acb346c0d2d8]
	I0307 10:13:28.435408    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:13:28.447224    4223 logs.go:276] 2 containers: [e50116dd9958 60e5f0621b8c]
	I0307 10:13:28.447308    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:13:28.458133    4223 logs.go:276] 1 containers: [438e893703a5]
	I0307 10:13:28.458204    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:13:28.472602    4223 logs.go:276] 2 containers: [7b2b4d88e84b 78c1d8b7fef3]
	I0307 10:13:28.472672    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:13:28.483164    4223 logs.go:276] 0 containers: []
	W0307 10:13:28.483182    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:13:28.483233    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:13:28.496331    4223 logs.go:276] 2 containers: [e8e2f52b53d1 e7d817fc8222]
	I0307 10:13:28.496349    4223 logs.go:123] Gathering logs for kube-apiserver [3b5187de19fa] ...
	I0307 10:13:28.496355    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b5187de19fa"
	I0307 10:13:28.515154    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:13:28.515164    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:13:28.540659    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:13:28.540669    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:13:28.580083    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:13:28.580096    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:13:28.584728    4223 logs.go:123] Gathering logs for coredns [acb346c0d2d8] ...
	I0307 10:13:28.584734    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb346c0d2d8"
	I0307 10:13:28.595853    4223 logs.go:123] Gathering logs for kube-scheduler [60e5f0621b8c] ...
	I0307 10:13:28.595865    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e5f0621b8c"
	I0307 10:13:28.611883    4223 logs.go:123] Gathering logs for etcd [e7399b2ae704] ...
	I0307 10:13:28.611895    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7399b2ae704"
	I0307 10:13:28.634843    4223 logs.go:123] Gathering logs for etcd [9378fea4f127] ...
	I0307 10:13:28.634857    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9378fea4f127"
	I0307 10:13:28.649409    4223 logs.go:123] Gathering logs for storage-provisioner [e8e2f52b53d1] ...
	I0307 10:13:28.649422    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8e2f52b53d1"
	I0307 10:13:28.661077    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:13:28.661088    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:13:28.672866    4223 logs.go:123] Gathering logs for kube-controller-manager [7b2b4d88e84b] ...
	I0307 10:13:28.672879    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2b4d88e84b"
	I0307 10:13:28.690316    4223 logs.go:123] Gathering logs for kube-controller-manager [78c1d8b7fef3] ...
	I0307 10:13:28.690327    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c1d8b7fef3"
	I0307 10:13:28.702213    4223 logs.go:123] Gathering logs for storage-provisioner [e7d817fc8222] ...
	I0307 10:13:28.702224    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7d817fc8222"
	I0307 10:13:28.716051    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:13:28.716064    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:13:28.751646    4223 logs.go:123] Gathering logs for kube-apiserver [912e171f7628] ...
	I0307 10:13:28.751656    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912e171f7628"
	I0307 10:13:28.765515    4223 logs.go:123] Gathering logs for kube-scheduler [e50116dd9958] ...
	I0307 10:13:28.765527    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e50116dd9958"
	I0307 10:13:28.776895    4223 logs.go:123] Gathering logs for kube-proxy [438e893703a5] ...
	I0307 10:13:28.776906    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 438e893703a5"
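Editor's note: the "container status" command is worth unpacking. The one-liner sudo `which crictl || echo crictl` ps -a || sudo docker ps -a tries crictl when it is on the PATH and falls back to plain "docker ps -a" if crictl is absent or fails. The same fallback expressed in Go:

    package sketch

    import "os/exec"

    // containerStatus prefers crictl when installed, otherwise falls
    // back to docker, mirroring the shell one-liner above.
    func containerStatus() ([]byte, error) {
        if path, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command("sudo", path, "ps", "-a").CombinedOutput(); err == nil {
                return out, nil
            }
        }
        return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    }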
	I0307 10:13:31.291024    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:13:36.293631    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:13:36.294038    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:13:36.333176    4223 logs.go:276] 2 containers: [912e171f7628 3b5187de19fa]
	I0307 10:13:36.333306    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:13:36.354619    4223 logs.go:276] 2 containers: [e7399b2ae704 9378fea4f127]
	I0307 10:13:36.354728    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:13:36.374626    4223 logs.go:276] 1 containers: [acb346c0d2d8]
	I0307 10:13:36.374706    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:13:36.386382    4223 logs.go:276] 2 containers: [e50116dd9958 60e5f0621b8c]
	I0307 10:13:36.386447    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:13:36.397030    4223 logs.go:276] 1 containers: [438e893703a5]
	I0307 10:13:36.397091    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:13:36.407810    4223 logs.go:276] 2 containers: [7b2b4d88e84b 78c1d8b7fef3]
	I0307 10:13:36.407885    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:13:36.418390    4223 logs.go:276] 0 containers: []
	W0307 10:13:36.418401    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:13:36.418457    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:13:36.429034    4223 logs.go:276] 2 containers: [e8e2f52b53d1 e7d817fc8222]
	I0307 10:13:36.429051    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:13:36.429059    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:13:36.466609    4223 logs.go:123] Gathering logs for kube-apiserver [912e171f7628] ...
	I0307 10:13:36.466620    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912e171f7628"
	I0307 10:13:36.480447    4223 logs.go:123] Gathering logs for kube-apiserver [3b5187de19fa] ...
	I0307 10:13:36.480460    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b5187de19fa"
	I0307 10:13:36.499310    4223 logs.go:123] Gathering logs for etcd [9378fea4f127] ...
	I0307 10:13:36.499320    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9378fea4f127"
	I0307 10:13:36.513467    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:13:36.513479    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:13:36.525288    4223 logs.go:123] Gathering logs for etcd [e7399b2ae704] ...
	I0307 10:13:36.525297    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7399b2ae704"
	I0307 10:13:36.539569    4223 logs.go:123] Gathering logs for kube-scheduler [60e5f0621b8c] ...
	I0307 10:13:36.539583    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e5f0621b8c"
	I0307 10:13:36.554726    4223 logs.go:123] Gathering logs for storage-provisioner [e8e2f52b53d1] ...
	I0307 10:13:36.554738    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8e2f52b53d1"
	I0307 10:13:36.566052    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:13:36.566065    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:13:36.570170    4223 logs.go:123] Gathering logs for kube-controller-manager [7b2b4d88e84b] ...
	I0307 10:13:36.570178    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2b4d88e84b"
	I0307 10:13:36.587465    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:13:36.587478    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:13:36.612468    4223 logs.go:123] Gathering logs for storage-provisioner [e7d817fc8222] ...
	I0307 10:13:36.612477    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7d817fc8222"
	I0307 10:13:36.623521    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:13:36.623533    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:13:36.665651    4223 logs.go:123] Gathering logs for coredns [acb346c0d2d8] ...
	I0307 10:13:36.665663    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb346c0d2d8"
	I0307 10:13:36.677514    4223 logs.go:123] Gathering logs for kube-scheduler [e50116dd9958] ...
	I0307 10:13:36.677527    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e50116dd9958"
	I0307 10:13:36.689206    4223 logs.go:123] Gathering logs for kube-proxy [438e893703a5] ...
	I0307 10:13:36.689220    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 438e893703a5"
	I0307 10:13:36.701365    4223 logs.go:123] Gathering logs for kube-controller-manager [78c1d8b7fef3] ...
	I0307 10:13:36.701377    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c1d8b7fef3"
	I0307 10:13:39.213919    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:13:44.216001    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:13:44.216344    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:13:44.250426    4223 logs.go:276] 2 containers: [912e171f7628 3b5187de19fa]
	I0307 10:13:44.250601    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:13:44.270388    4223 logs.go:276] 2 containers: [e7399b2ae704 9378fea4f127]
	I0307 10:13:44.270486    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:13:44.289785    4223 logs.go:276] 1 containers: [acb346c0d2d8]
	I0307 10:13:44.289857    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:13:44.306347    4223 logs.go:276] 2 containers: [e50116dd9958 60e5f0621b8c]
	I0307 10:13:44.306529    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:13:44.318379    4223 logs.go:276] 1 containers: [438e893703a5]
	I0307 10:13:44.318430    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:13:44.330147    4223 logs.go:276] 2 containers: [7b2b4d88e84b 78c1d8b7fef3]
	I0307 10:13:44.330211    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:13:44.342056    4223 logs.go:276] 0 containers: []
	W0307 10:13:44.342069    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:13:44.342137    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:13:44.354184    4223 logs.go:276] 2 containers: [e8e2f52b53d1 e7d817fc8222]
	I0307 10:13:44.354200    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:13:44.354206    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:13:44.396001    4223 logs.go:123] Gathering logs for etcd [e7399b2ae704] ...
	I0307 10:13:44.396012    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7399b2ae704"
	I0307 10:13:44.410359    4223 logs.go:123] Gathering logs for kube-controller-manager [7b2b4d88e84b] ...
	I0307 10:13:44.410368    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2b4d88e84b"
	I0307 10:13:44.427615    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:13:44.427628    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:13:44.452200    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:13:44.452208    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:13:44.488754    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:13:44.488764    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:13:44.492814    4223 logs.go:123] Gathering logs for kube-scheduler [e50116dd9958] ...
	I0307 10:13:44.492822    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e50116dd9958"
	I0307 10:13:44.504496    4223 logs.go:123] Gathering logs for kube-controller-manager [78c1d8b7fef3] ...
	I0307 10:13:44.504507    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c1d8b7fef3"
	I0307 10:13:44.516124    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:13:44.516134    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:13:44.527355    4223 logs.go:123] Gathering logs for kube-apiserver [3b5187de19fa] ...
	I0307 10:13:44.527366    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b5187de19fa"
	I0307 10:13:44.546395    4223 logs.go:123] Gathering logs for kube-scheduler [60e5f0621b8c] ...
	I0307 10:13:44.546405    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e5f0621b8c"
	I0307 10:13:44.561151    4223 logs.go:123] Gathering logs for kube-proxy [438e893703a5] ...
	I0307 10:13:44.561161    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 438e893703a5"
	I0307 10:13:44.572876    4223 logs.go:123] Gathering logs for kube-apiserver [912e171f7628] ...
	I0307 10:13:44.572885    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912e171f7628"
	I0307 10:13:44.586406    4223 logs.go:123] Gathering logs for etcd [9378fea4f127] ...
	I0307 10:13:44.586416    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9378fea4f127"
	I0307 10:13:44.603509    4223 logs.go:123] Gathering logs for coredns [acb346c0d2d8] ...
	I0307 10:13:44.603519    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb346c0d2d8"
	I0307 10:13:44.614512    4223 logs.go:123] Gathering logs for storage-provisioner [e8e2f52b53d1] ...
	I0307 10:13:44.614523    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8e2f52b53d1"
	I0307 10:13:44.626037    4223 logs.go:123] Gathering logs for storage-provisioner [e7d817fc8222] ...
	I0307 10:13:44.626047    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7d817fc8222"
	I0307 10:13:47.137132    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:13:52.138866    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:13:52.139293    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:13:52.179073    4223 logs.go:276] 2 containers: [912e171f7628 3b5187de19fa]
	I0307 10:13:52.179213    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:13:52.200465    4223 logs.go:276] 2 containers: [e7399b2ae704 9378fea4f127]
	I0307 10:13:52.200558    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:13:52.215831    4223 logs.go:276] 1 containers: [acb346c0d2d8]
	I0307 10:13:52.215897    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:13:52.228138    4223 logs.go:276] 2 containers: [e50116dd9958 60e5f0621b8c]
	I0307 10:13:52.228213    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:13:52.238561    4223 logs.go:276] 1 containers: [438e893703a5]
	I0307 10:13:52.238629    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:13:52.249476    4223 logs.go:276] 2 containers: [7b2b4d88e84b 78c1d8b7fef3]
	I0307 10:13:52.249541    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:13:52.264668    4223 logs.go:276] 0 containers: []
	W0307 10:13:52.264680    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:13:52.264737    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:13:52.275442    4223 logs.go:276] 2 containers: [e8e2f52b53d1 e7d817fc8222]
	I0307 10:13:52.275463    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:13:52.275469    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:13:52.279720    4223 logs.go:123] Gathering logs for etcd [e7399b2ae704] ...
	I0307 10:13:52.279726    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7399b2ae704"
	I0307 10:13:52.293650    4223 logs.go:123] Gathering logs for storage-provisioner [e7d817fc8222] ...
	I0307 10:13:52.293663    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7d817fc8222"
	I0307 10:13:52.304838    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:13:52.304848    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:13:52.316914    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:13:52.316925    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:13:52.352103    4223 logs.go:123] Gathering logs for kube-apiserver [3b5187de19fa] ...
	I0307 10:13:52.352116    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b5187de19fa"
	I0307 10:13:52.371402    4223 logs.go:123] Gathering logs for kube-scheduler [e50116dd9958] ...
	I0307 10:13:52.371413    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e50116dd9958"
	I0307 10:13:52.382641    4223 logs.go:123] Gathering logs for storage-provisioner [e8e2f52b53d1] ...
	I0307 10:13:52.382651    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8e2f52b53d1"
	I0307 10:13:52.394312    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:13:52.394322    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:13:52.431324    4223 logs.go:123] Gathering logs for etcd [9378fea4f127] ...
	I0307 10:13:52.431330    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9378fea4f127"
	I0307 10:13:52.445671    4223 logs.go:123] Gathering logs for kube-scheduler [60e5f0621b8c] ...
	I0307 10:13:52.445682    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e5f0621b8c"
	I0307 10:13:52.460950    4223 logs.go:123] Gathering logs for kube-proxy [438e893703a5] ...
	I0307 10:13:52.460961    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 438e893703a5"
	I0307 10:13:52.472229    4223 logs.go:123] Gathering logs for kube-controller-manager [7b2b4d88e84b] ...
	I0307 10:13:52.472239    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2b4d88e84b"
	I0307 10:13:52.489230    4223 logs.go:123] Gathering logs for kube-apiserver [912e171f7628] ...
	I0307 10:13:52.489245    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912e171f7628"
	I0307 10:13:52.503142    4223 logs.go:123] Gathering logs for coredns [acb346c0d2d8] ...
	I0307 10:13:52.503157    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb346c0d2d8"
	I0307 10:13:52.514218    4223 logs.go:123] Gathering logs for kube-controller-manager [78c1d8b7fef3] ...
	I0307 10:13:52.514229    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c1d8b7fef3"
	I0307 10:13:52.526593    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:13:52.526604    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:13:55.052221    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:14:00.054670    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:14:00.054800    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:14:00.066654    4223 logs.go:276] 2 containers: [912e171f7628 3b5187de19fa]
	I0307 10:14:00.066736    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:14:00.078300    4223 logs.go:276] 2 containers: [e7399b2ae704 9378fea4f127]
	I0307 10:14:00.078371    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:14:00.090868    4223 logs.go:276] 1 containers: [acb346c0d2d8]
	I0307 10:14:00.090943    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:14:00.101305    4223 logs.go:276] 2 containers: [e50116dd9958 60e5f0621b8c]
	I0307 10:14:00.101373    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:14:00.115453    4223 logs.go:276] 1 containers: [438e893703a5]
	I0307 10:14:00.115523    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:14:00.126309    4223 logs.go:276] 2 containers: [7b2b4d88e84b 78c1d8b7fef3]
	I0307 10:14:00.126378    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:14:00.136620    4223 logs.go:276] 0 containers: []
	W0307 10:14:00.136630    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:14:00.136684    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:14:00.147396    4223 logs.go:276] 2 containers: [e8e2f52b53d1 e7d817fc8222]
	I0307 10:14:00.147418    4223 logs.go:123] Gathering logs for kube-controller-manager [78c1d8b7fef3] ...
	I0307 10:14:00.147423    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c1d8b7fef3"
	I0307 10:14:00.158933    4223 logs.go:123] Gathering logs for storage-provisioner [e8e2f52b53d1] ...
	I0307 10:14:00.158944    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8e2f52b53d1"
	I0307 10:14:00.170492    4223 logs.go:123] Gathering logs for coredns [acb346c0d2d8] ...
	I0307 10:14:00.170506    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb346c0d2d8"
	I0307 10:14:00.181766    4223 logs.go:123] Gathering logs for kube-scheduler [60e5f0621b8c] ...
	I0307 10:14:00.181778    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e5f0621b8c"
	I0307 10:14:00.197313    4223 logs.go:123] Gathering logs for kube-controller-manager [7b2b4d88e84b] ...
	I0307 10:14:00.197323    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2b4d88e84b"
	I0307 10:14:00.215062    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:14:00.215075    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:14:00.227840    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:14:00.227853    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:14:00.265400    4223 logs.go:123] Gathering logs for etcd [e7399b2ae704] ...
	I0307 10:14:00.265411    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7399b2ae704"
	I0307 10:14:00.279471    4223 logs.go:123] Gathering logs for kube-scheduler [e50116dd9958] ...
	I0307 10:14:00.279481    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e50116dd9958"
	I0307 10:14:00.291880    4223 logs.go:123] Gathering logs for kube-apiserver [912e171f7628] ...
	I0307 10:14:00.291890    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912e171f7628"
	I0307 10:14:00.309064    4223 logs.go:123] Gathering logs for etcd [9378fea4f127] ...
	I0307 10:14:00.309075    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9378fea4f127"
	I0307 10:14:00.322943    4223 logs.go:123] Gathering logs for kube-proxy [438e893703a5] ...
	I0307 10:14:00.322957    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 438e893703a5"
	I0307 10:14:00.335237    4223 logs.go:123] Gathering logs for storage-provisioner [e7d817fc8222] ...
	I0307 10:14:00.335248    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7d817fc8222"
	I0307 10:14:00.346924    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:14:00.346937    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:14:00.372106    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:14:00.372115    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:14:00.376986    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:14:00.376993    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:14:00.414828    4223 logs.go:123] Gathering logs for kube-apiserver [3b5187de19fa] ...
	I0307 10:14:00.414839    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b5187de19fa"
	I0307 10:14:02.936388    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:14:07.938476    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:14:07.938735    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:14:07.965232    4223 logs.go:276] 2 containers: [912e171f7628 3b5187de19fa]
	I0307 10:14:07.965323    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:14:07.980455    4223 logs.go:276] 2 containers: [e7399b2ae704 9378fea4f127]
	I0307 10:14:07.980523    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:14:07.993412    4223 logs.go:276] 1 containers: [acb346c0d2d8]
	I0307 10:14:07.993489    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:14:08.004122    4223 logs.go:276] 2 containers: [e50116dd9958 60e5f0621b8c]
	I0307 10:14:08.004201    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:14:08.014931    4223 logs.go:276] 1 containers: [438e893703a5]
	I0307 10:14:08.014995    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:14:08.024917    4223 logs.go:276] 2 containers: [7b2b4d88e84b 78c1d8b7fef3]
	I0307 10:14:08.025007    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:14:08.037980    4223 logs.go:276] 0 containers: []
	W0307 10:14:08.037992    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:14:08.038050    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:14:08.048932    4223 logs.go:276] 2 containers: [e8e2f52b53d1 e7d817fc8222]
	I0307 10:14:08.048950    4223 logs.go:123] Gathering logs for etcd [9378fea4f127] ...
	I0307 10:14:08.048955    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9378fea4f127"
	I0307 10:14:08.064877    4223 logs.go:123] Gathering logs for storage-provisioner [e7d817fc8222] ...
	I0307 10:14:08.064889    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7d817fc8222"
	I0307 10:14:08.075872    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:14:08.075884    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:14:08.088165    4223 logs.go:123] Gathering logs for etcd [e7399b2ae704] ...
	I0307 10:14:08.088175    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7399b2ae704"
	I0307 10:14:08.102160    4223 logs.go:123] Gathering logs for kube-apiserver [912e171f7628] ...
	I0307 10:14:08.102171    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912e171f7628"
	I0307 10:14:08.115430    4223 logs.go:123] Gathering logs for kube-apiserver [3b5187de19fa] ...
	I0307 10:14:08.115440    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b5187de19fa"
	I0307 10:14:08.133886    4223 logs.go:123] Gathering logs for kube-scheduler [e50116dd9958] ...
	I0307 10:14:08.133897    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e50116dd9958"
	I0307 10:14:08.146176    4223 logs.go:123] Gathering logs for kube-proxy [438e893703a5] ...
	I0307 10:14:08.146186    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 438e893703a5"
	I0307 10:14:08.181330    4223 logs.go:123] Gathering logs for kube-controller-manager [78c1d8b7fef3] ...
	I0307 10:14:08.181343    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c1d8b7fef3"
	I0307 10:14:08.204031    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:14:08.204043    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:14:08.239874    4223 logs.go:123] Gathering logs for storage-provisioner [e8e2f52b53d1] ...
	I0307 10:14:08.239883    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8e2f52b53d1"
	I0307 10:14:08.251599    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:14:08.251610    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:14:08.288281    4223 logs.go:123] Gathering logs for coredns [acb346c0d2d8] ...
	I0307 10:14:08.288291    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb346c0d2d8"
	I0307 10:14:08.299577    4223 logs.go:123] Gathering logs for kube-scheduler [60e5f0621b8c] ...
	I0307 10:14:08.299591    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e5f0621b8c"
	I0307 10:14:08.314664    4223 logs.go:123] Gathering logs for kube-controller-manager [7b2b4d88e84b] ...
	I0307 10:14:08.314676    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2b4d88e84b"
	I0307 10:14:08.332033    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:14:08.332044    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:14:08.355607    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:14:08.355614    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:14:10.862937    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:14:15.865360    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:14:15.865794    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:14:15.908371    4223 logs.go:276] 2 containers: [912e171f7628 3b5187de19fa]
	I0307 10:14:15.908486    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:14:15.926244    4223 logs.go:276] 2 containers: [e7399b2ae704 9378fea4f127]
	I0307 10:14:15.926339    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:14:15.937489    4223 logs.go:276] 1 containers: [acb346c0d2d8]
	I0307 10:14:15.937576    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:14:15.948158    4223 logs.go:276] 2 containers: [e50116dd9958 60e5f0621b8c]
	I0307 10:14:15.948228    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:14:15.960015    4223 logs.go:276] 1 containers: [438e893703a5]
	I0307 10:14:15.960080    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:14:15.970483    4223 logs.go:276] 2 containers: [7b2b4d88e84b 78c1d8b7fef3]
	I0307 10:14:15.970555    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:14:15.980839    4223 logs.go:276] 0 containers: []
	W0307 10:14:15.980850    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:14:15.980920    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:14:15.993721    4223 logs.go:276] 2 containers: [e8e2f52b53d1 e7d817fc8222]
	I0307 10:14:15.993743    4223 logs.go:123] Gathering logs for storage-provisioner [e8e2f52b53d1] ...
	I0307 10:14:15.993750    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8e2f52b53d1"
	I0307 10:14:16.005409    4223 logs.go:123] Gathering logs for etcd [e7399b2ae704] ...
	I0307 10:14:16.005420    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7399b2ae704"
	I0307 10:14:16.026743    4223 logs.go:123] Gathering logs for kube-apiserver [3b5187de19fa] ...
	I0307 10:14:16.026757    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b5187de19fa"
	I0307 10:14:16.046467    4223 logs.go:123] Gathering logs for coredns [acb346c0d2d8] ...
	I0307 10:14:16.046481    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb346c0d2d8"
	I0307 10:14:16.058266    4223 logs.go:123] Gathering logs for kube-controller-manager [78c1d8b7fef3] ...
	I0307 10:14:16.058278    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c1d8b7fef3"
	I0307 10:14:16.071541    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:14:16.071552    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:14:16.097487    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:14:16.097497    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:14:16.114886    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:14:16.114901    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:14:16.151858    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:14:16.151869    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:14:16.194648    4223 logs.go:123] Gathering logs for kube-apiserver [912e171f7628] ...
	I0307 10:14:16.194660    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912e171f7628"
	I0307 10:14:16.210084    4223 logs.go:123] Gathering logs for kube-scheduler [e50116dd9958] ...
	I0307 10:14:16.210095    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e50116dd9958"
	I0307 10:14:16.222771    4223 logs.go:123] Gathering logs for kube-scheduler [60e5f0621b8c] ...
	I0307 10:14:16.222782    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e5f0621b8c"
	I0307 10:14:16.238138    4223 logs.go:123] Gathering logs for kube-proxy [438e893703a5] ...
	I0307 10:14:16.238153    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 438e893703a5"
	I0307 10:14:16.250699    4223 logs.go:123] Gathering logs for kube-controller-manager [7b2b4d88e84b] ...
	I0307 10:14:16.250709    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2b4d88e84b"
	I0307 10:14:16.277832    4223 logs.go:123] Gathering logs for storage-provisioner [e7d817fc8222] ...
	I0307 10:14:16.277843    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7d817fc8222"
	I0307 10:14:16.292842    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:14:16.292854    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:14:16.297301    4223 logs.go:123] Gathering logs for etcd [9378fea4f127] ...
	I0307 10:14:16.297306    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9378fea4f127"
	I0307 10:14:18.816137    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:14:23.818454    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:14:23.818917    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:14:23.862078    4223 logs.go:276] 2 containers: [912e171f7628 3b5187de19fa]
	I0307 10:14:23.862227    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:14:23.883905    4223 logs.go:276] 2 containers: [e7399b2ae704 9378fea4f127]
	I0307 10:14:23.884006    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:14:23.899001    4223 logs.go:276] 1 containers: [acb346c0d2d8]
	I0307 10:14:23.899085    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:14:23.912230    4223 logs.go:276] 2 containers: [e50116dd9958 60e5f0621b8c]
	I0307 10:14:23.912306    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:14:23.923410    4223 logs.go:276] 1 containers: [438e893703a5]
	I0307 10:14:23.923473    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:14:23.934142    4223 logs.go:276] 2 containers: [7b2b4d88e84b 78c1d8b7fef3]
	I0307 10:14:23.934209    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:14:23.945202    4223 logs.go:276] 0 containers: []
	W0307 10:14:23.945215    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:14:23.945272    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:14:23.956543    4223 logs.go:276] 2 containers: [e8e2f52b53d1 e7d817fc8222]
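	(For orientation: each cycle discovers the containers for one control-plane component at a time using docker ps name filters, as in the lines just above; a component with no match, kindnet here, only produces a warning and is skipped. A hedged Go sketch of that discovery step follows; the docker flags are copied verbatim from the log, but containerIDs is a hypothetical helper, not minikube's code.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists the IDs of all containers, running or exited,
	// whose name matches the k8s_<component> prefix used in the log.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		// docker prints one ID per line; Fields also drops the trailing newline.
		return strings.Fields(string(out)), nil
	}

	func main() {
		ids, err := containerIDs("kindnet")
		if err != nil {
			fmt.Println("docker ps failed:", err)
			return
		}
		if len(ids) == 0 {
			// Mirrors the warning in the transcript.
			fmt.Println(`No container was found matching "kindnet"`)
		}
	}

	Components with two IDs, such as kube-apiserver above, are containers that have been restarted: the current instance plus an exited one, and the loop gathers logs from both.)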
	I0307 10:14:23.956563    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:14:23.956570    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:14:23.961441    4223 logs.go:123] Gathering logs for kube-proxy [438e893703a5] ...
	I0307 10:14:23.961451    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 438e893703a5"
	I0307 10:14:23.975567    4223 logs.go:123] Gathering logs for kube-controller-manager [78c1d8b7fef3] ...
	I0307 10:14:23.975579    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c1d8b7fef3"
	I0307 10:14:23.987539    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:14:23.987551    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:14:24.012346    4223 logs.go:123] Gathering logs for kube-apiserver [3b5187de19fa] ...
	I0307 10:14:24.012354    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b5187de19fa"
	I0307 10:14:24.031886    4223 logs.go:123] Gathering logs for etcd [9378fea4f127] ...
	I0307 10:14:24.031900    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9378fea4f127"
	I0307 10:14:24.046970    4223 logs.go:123] Gathering logs for coredns [acb346c0d2d8] ...
	I0307 10:14:24.046984    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb346c0d2d8"
	I0307 10:14:24.062663    4223 logs.go:123] Gathering logs for storage-provisioner [e8e2f52b53d1] ...
	I0307 10:14:24.062674    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8e2f52b53d1"
	I0307 10:14:24.074380    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:14:24.074392    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:14:24.111963    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:14:24.111973    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:14:24.146688    4223 logs.go:123] Gathering logs for kube-apiserver [912e171f7628] ...
	I0307 10:14:24.146698    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912e171f7628"
	I0307 10:14:24.161278    4223 logs.go:123] Gathering logs for kube-controller-manager [7b2b4d88e84b] ...
	I0307 10:14:24.161289    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2b4d88e84b"
	I0307 10:14:24.181904    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:14:24.181915    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:14:24.193920    4223 logs.go:123] Gathering logs for etcd [e7399b2ae704] ...
	I0307 10:14:24.193929    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7399b2ae704"
	I0307 10:14:24.208175    4223 logs.go:123] Gathering logs for kube-scheduler [e50116dd9958] ...
	I0307 10:14:24.208187    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e50116dd9958"
	I0307 10:14:24.220291    4223 logs.go:123] Gathering logs for kube-scheduler [60e5f0621b8c] ...
	I0307 10:14:24.220303    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e5f0621b8c"
	I0307 10:14:24.236101    4223 logs.go:123] Gathering logs for storage-provisioner [e7d817fc8222] ...
	I0307 10:14:24.236112    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7d817fc8222"
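	(A note on the "container status" command that recurs above: the backquoted `which crictl || echo crictl` substitutes crictl's full path when it is installed, or the bare name otherwise (which then fails cleanly), and the trailing || sudo docker ps -a falls back to docker when crictl is unavailable. A rough Go sketch of running that fallback through /bin/bash -c, in the style of the ssh_runner lines; this is a standalone illustration under those assumptions, not the runner itself.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Command string copied verbatim from the transcript.
		cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Println("container status probe failed:", err)
		}
		fmt.Print(string(out))
	}

	Routing through bash -c is what makes the backquote substitution and the || fallback work at all; passing the string to exec directly would treat it as a single literal argument.)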
	I0307 10:14:26.748983    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:14:31.751061    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:14:31.751227    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:14:31.765048    4223 logs.go:276] 2 containers: [912e171f7628 3b5187de19fa]
	I0307 10:14:31.765126    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:14:31.775819    4223 logs.go:276] 2 containers: [e7399b2ae704 9378fea4f127]
	I0307 10:14:31.775882    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:14:31.786729    4223 logs.go:276] 1 containers: [acb346c0d2d8]
	I0307 10:14:31.786796    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:14:31.797069    4223 logs.go:276] 2 containers: [e50116dd9958 60e5f0621b8c]
	I0307 10:14:31.797144    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:14:31.808058    4223 logs.go:276] 1 containers: [438e893703a5]
	I0307 10:14:31.808126    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:14:31.827729    4223 logs.go:276] 2 containers: [7b2b4d88e84b 78c1d8b7fef3]
	I0307 10:14:31.827808    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:14:31.838519    4223 logs.go:276] 0 containers: []
	W0307 10:14:31.838531    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:14:31.838594    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:14:31.848581    4223 logs.go:276] 2 containers: [e8e2f52b53d1 e7d817fc8222]
	I0307 10:14:31.848599    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:14:31.848605    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:14:31.853565    4223 logs.go:123] Gathering logs for coredns [acb346c0d2d8] ...
	I0307 10:14:31.853574    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb346c0d2d8"
	I0307 10:14:31.864782    4223 logs.go:123] Gathering logs for kube-scheduler [e50116dd9958] ...
	I0307 10:14:31.864794    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e50116dd9958"
	I0307 10:14:31.876424    4223 logs.go:123] Gathering logs for kube-controller-manager [78c1d8b7fef3] ...
	I0307 10:14:31.876435    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c1d8b7fef3"
	I0307 10:14:31.889411    4223 logs.go:123] Gathering logs for storage-provisioner [e7d817fc8222] ...
	I0307 10:14:31.889425    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7d817fc8222"
	I0307 10:14:31.905548    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:14:31.905558    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:14:31.932594    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:14:31.932607    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:14:31.944865    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:14:31.944878    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:14:31.979936    4223 logs.go:123] Gathering logs for kube-apiserver [912e171f7628] ...
	I0307 10:14:31.979950    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912e171f7628"
	I0307 10:14:31.999213    4223 logs.go:123] Gathering logs for kube-apiserver [3b5187de19fa] ...
	I0307 10:14:31.999226    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b5187de19fa"
	I0307 10:14:32.019102    4223 logs.go:123] Gathering logs for etcd [9378fea4f127] ...
	I0307 10:14:32.019117    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9378fea4f127"
	I0307 10:14:32.033627    4223 logs.go:123] Gathering logs for kube-scheduler [60e5f0621b8c] ...
	I0307 10:14:32.033640    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e5f0621b8c"
	I0307 10:14:32.049157    4223 logs.go:123] Gathering logs for kube-controller-manager [7b2b4d88e84b] ...
	I0307 10:14:32.049171    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2b4d88e84b"
	I0307 10:14:32.066698    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:14:32.066709    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:14:32.104812    4223 logs.go:123] Gathering logs for etcd [e7399b2ae704] ...
	I0307 10:14:32.104827    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7399b2ae704"
	I0307 10:14:32.119972    4223 logs.go:123] Gathering logs for kube-proxy [438e893703a5] ...
	I0307 10:14:32.119983    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 438e893703a5"
	I0307 10:14:32.131768    4223 logs.go:123] Gathering logs for storage-provisioner [e8e2f52b53d1] ...
	I0307 10:14:32.131781    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8e2f52b53d1"
	I0307 10:14:34.645977    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:14:39.648131    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:14:39.648400    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:14:39.677276    4223 logs.go:276] 2 containers: [912e171f7628 3b5187de19fa]
	I0307 10:14:39.677402    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:14:39.695872    4223 logs.go:276] 2 containers: [e7399b2ae704 9378fea4f127]
	I0307 10:14:39.695961    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:14:39.709689    4223 logs.go:276] 1 containers: [acb346c0d2d8]
	I0307 10:14:39.709759    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:14:39.727009    4223 logs.go:276] 2 containers: [e50116dd9958 60e5f0621b8c]
	I0307 10:14:39.727084    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:14:39.737073    4223 logs.go:276] 1 containers: [438e893703a5]
	I0307 10:14:39.737147    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:14:39.747701    4223 logs.go:276] 2 containers: [7b2b4d88e84b 78c1d8b7fef3]
	I0307 10:14:39.747769    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:14:39.757469    4223 logs.go:276] 0 containers: []
	W0307 10:14:39.757482    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:14:39.757545    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:14:39.767913    4223 logs.go:276] 2 containers: [e8e2f52b53d1 e7d817fc8222]
	I0307 10:14:39.767931    4223 logs.go:123] Gathering logs for etcd [9378fea4f127] ...
	I0307 10:14:39.767937    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9378fea4f127"
	I0307 10:14:39.783768    4223 logs.go:123] Gathering logs for kube-scheduler [60e5f0621b8c] ...
	I0307 10:14:39.783779    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e5f0621b8c"
	I0307 10:14:39.798713    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:14:39.798725    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:14:39.836737    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:14:39.836755    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:14:39.872489    4223 logs.go:123] Gathering logs for kube-controller-manager [78c1d8b7fef3] ...
	I0307 10:14:39.872501    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c1d8b7fef3"
	I0307 10:14:39.884432    4223 logs.go:123] Gathering logs for storage-provisioner [e8e2f52b53d1] ...
	I0307 10:14:39.884444    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8e2f52b53d1"
	I0307 10:14:39.896341    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:14:39.896355    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:14:39.920310    4223 logs.go:123] Gathering logs for kube-scheduler [e50116dd9958] ...
	I0307 10:14:39.920320    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e50116dd9958"
	I0307 10:14:39.931921    4223 logs.go:123] Gathering logs for kube-controller-manager [7b2b4d88e84b] ...
	I0307 10:14:39.931931    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2b4d88e84b"
	I0307 10:14:39.948946    4223 logs.go:123] Gathering logs for etcd [e7399b2ae704] ...
	I0307 10:14:39.948957    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7399b2ae704"
	I0307 10:14:39.963077    4223 logs.go:123] Gathering logs for coredns [acb346c0d2d8] ...
	I0307 10:14:39.963090    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb346c0d2d8"
	I0307 10:14:39.974024    4223 logs.go:123] Gathering logs for kube-proxy [438e893703a5] ...
	I0307 10:14:39.974036    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 438e893703a5"
	I0307 10:14:39.985259    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:14:39.985267    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:14:39.989602    4223 logs.go:123] Gathering logs for kube-apiserver [3b5187de19fa] ...
	I0307 10:14:39.989611    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b5187de19fa"
	I0307 10:14:40.008890    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:14:40.008904    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:14:40.020649    4223 logs.go:123] Gathering logs for kube-apiserver [912e171f7628] ...
	I0307 10:14:40.020663    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912e171f7628"
	I0307 10:14:40.035031    4223 logs.go:123] Gathering logs for storage-provisioner [e7d817fc8222] ...
	I0307 10:14:40.035043    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7d817fc8222"
	I0307 10:14:42.548081    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:14:47.550497    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:14:47.550657    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:14:47.571762    4223 logs.go:276] 2 containers: [912e171f7628 3b5187de19fa]
	I0307 10:14:47.571863    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:14:47.586862    4223 logs.go:276] 2 containers: [e7399b2ae704 9378fea4f127]
	I0307 10:14:47.586944    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:14:47.598129    4223 logs.go:276] 1 containers: [acb346c0d2d8]
	I0307 10:14:47.598191    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:14:47.609121    4223 logs.go:276] 2 containers: [e50116dd9958 60e5f0621b8c]
	I0307 10:14:47.609192    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:14:47.619508    4223 logs.go:276] 1 containers: [438e893703a5]
	I0307 10:14:47.619576    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:14:47.630193    4223 logs.go:276] 2 containers: [7b2b4d88e84b 78c1d8b7fef3]
	I0307 10:14:47.630261    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:14:47.639997    4223 logs.go:276] 0 containers: []
	W0307 10:14:47.640009    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:14:47.640067    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:14:47.650482    4223 logs.go:276] 2 containers: [e8e2f52b53d1 e7d817fc8222]
	I0307 10:14:47.650502    4223 logs.go:123] Gathering logs for kube-apiserver [912e171f7628] ...
	I0307 10:14:47.650507    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912e171f7628"
	I0307 10:14:47.664002    4223 logs.go:123] Gathering logs for etcd [e7399b2ae704] ...
	I0307 10:14:47.664012    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7399b2ae704"
	I0307 10:14:47.677904    4223 logs.go:123] Gathering logs for storage-provisioner [e8e2f52b53d1] ...
	I0307 10:14:47.677913    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8e2f52b53d1"
	I0307 10:14:47.695252    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:14:47.695264    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:14:47.719041    4223 logs.go:123] Gathering logs for kube-apiserver [3b5187de19fa] ...
	I0307 10:14:47.719048    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b5187de19fa"
	I0307 10:14:47.738371    4223 logs.go:123] Gathering logs for etcd [9378fea4f127] ...
	I0307 10:14:47.738382    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9378fea4f127"
	I0307 10:14:47.752479    4223 logs.go:123] Gathering logs for kube-scheduler [60e5f0621b8c] ...
	I0307 10:14:47.752489    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e5f0621b8c"
	I0307 10:14:47.767654    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:14:47.767668    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:14:47.771964    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:14:47.771973    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:14:47.806592    4223 logs.go:123] Gathering logs for kube-scheduler [e50116dd9958] ...
	I0307 10:14:47.806605    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e50116dd9958"
	I0307 10:14:47.818228    4223 logs.go:123] Gathering logs for storage-provisioner [e7d817fc8222] ...
	I0307 10:14:47.818241    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7d817fc8222"
	I0307 10:14:47.829443    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:14:47.829456    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:14:47.841053    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:14:47.841068    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:14:47.878926    4223 logs.go:123] Gathering logs for coredns [acb346c0d2d8] ...
	I0307 10:14:47.878935    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb346c0d2d8"
	I0307 10:14:47.889910    4223 logs.go:123] Gathering logs for kube-proxy [438e893703a5] ...
	I0307 10:14:47.889921    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 438e893703a5"
	I0307 10:14:47.901005    4223 logs.go:123] Gathering logs for kube-controller-manager [7b2b4d88e84b] ...
	I0307 10:14:47.901018    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2b4d88e84b"
	I0307 10:14:47.921702    4223 logs.go:123] Gathering logs for kube-controller-manager [78c1d8b7fef3] ...
	I0307 10:14:47.921712    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c1d8b7fef3"
	I0307 10:14:50.439130    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:14:55.441643    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:14:55.441748    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:14:55.452679    4223 logs.go:276] 2 containers: [912e171f7628 3b5187de19fa]
	I0307 10:14:55.452761    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:14:55.463475    4223 logs.go:276] 2 containers: [e7399b2ae704 9378fea4f127]
	I0307 10:14:55.463555    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:14:55.478713    4223 logs.go:276] 1 containers: [acb346c0d2d8]
	I0307 10:14:55.478778    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:14:55.489743    4223 logs.go:276] 2 containers: [e50116dd9958 60e5f0621b8c]
	I0307 10:14:55.489804    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:14:55.500247    4223 logs.go:276] 1 containers: [438e893703a5]
	I0307 10:14:55.500312    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:14:55.510824    4223 logs.go:276] 2 containers: [7b2b4d88e84b 78c1d8b7fef3]
	I0307 10:14:55.510891    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:14:55.525096    4223 logs.go:276] 0 containers: []
	W0307 10:14:55.525108    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:14:55.525163    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:14:55.535849    4223 logs.go:276] 2 containers: [e8e2f52b53d1 e7d817fc8222]
	I0307 10:14:55.535866    4223 logs.go:123] Gathering logs for kube-scheduler [e50116dd9958] ...
	I0307 10:14:55.535872    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e50116dd9958"
	I0307 10:14:55.547581    4223 logs.go:123] Gathering logs for kube-scheduler [60e5f0621b8c] ...
	I0307 10:14:55.547593    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e5f0621b8c"
	I0307 10:14:55.562830    4223 logs.go:123] Gathering logs for kube-proxy [438e893703a5] ...
	I0307 10:14:55.562841    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 438e893703a5"
	I0307 10:14:55.576190    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:14:55.576202    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:14:55.580723    4223 logs.go:123] Gathering logs for etcd [9378fea4f127] ...
	I0307 10:14:55.580730    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9378fea4f127"
	I0307 10:14:55.594982    4223 logs.go:123] Gathering logs for coredns [acb346c0d2d8] ...
	I0307 10:14:55.594994    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb346c0d2d8"
	I0307 10:14:55.606485    4223 logs.go:123] Gathering logs for kube-controller-manager [7b2b4d88e84b] ...
	I0307 10:14:55.606496    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2b4d88e84b"
	I0307 10:14:55.627090    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:14:55.627100    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:14:55.665610    4223 logs.go:123] Gathering logs for kube-apiserver [3b5187de19fa] ...
	I0307 10:14:55.665617    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b5187de19fa"
	I0307 10:14:55.685139    4223 logs.go:123] Gathering logs for storage-provisioner [e8e2f52b53d1] ...
	I0307 10:14:55.685149    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8e2f52b53d1"
	I0307 10:14:55.696948    4223 logs.go:123] Gathering logs for storage-provisioner [e7d817fc8222] ...
	I0307 10:14:55.696960    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7d817fc8222"
	I0307 10:14:55.708385    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:14:55.708396    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:14:55.731931    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:14:55.731939    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:14:55.743710    4223 logs.go:123] Gathering logs for kube-apiserver [912e171f7628] ...
	I0307 10:14:55.743723    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912e171f7628"
	I0307 10:14:55.757958    4223 logs.go:123] Gathering logs for etcd [e7399b2ae704] ...
	I0307 10:14:55.757969    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7399b2ae704"
	I0307 10:14:55.775695    4223 logs.go:123] Gathering logs for kube-controller-manager [78c1d8b7fef3] ...
	I0307 10:14:55.775707    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c1d8b7fef3"
	I0307 10:14:55.788917    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:14:55.788927    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:14:58.327314    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:15:03.327677    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:15:03.328065    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:15:03.368845    4223 logs.go:276] 2 containers: [912e171f7628 3b5187de19fa]
	I0307 10:15:03.368977    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:15:03.400622    4223 logs.go:276] 2 containers: [e7399b2ae704 9378fea4f127]
	I0307 10:15:03.400707    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:15:03.416248    4223 logs.go:276] 1 containers: [acb346c0d2d8]
	I0307 10:15:03.416333    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:15:03.435993    4223 logs.go:276] 2 containers: [e50116dd9958 60e5f0621b8c]
	I0307 10:15:03.436072    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:15:03.449121    4223 logs.go:276] 1 containers: [438e893703a5]
	I0307 10:15:03.449203    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:15:03.459637    4223 logs.go:276] 2 containers: [7b2b4d88e84b 78c1d8b7fef3]
	I0307 10:15:03.459732    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:15:03.471032    4223 logs.go:276] 0 containers: []
	W0307 10:15:03.471047    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:15:03.471115    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:15:03.482778    4223 logs.go:276] 2 containers: [e8e2f52b53d1 e7d817fc8222]
	I0307 10:15:03.482797    4223 logs.go:123] Gathering logs for etcd [e7399b2ae704] ...
	I0307 10:15:03.482803    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7399b2ae704"
	I0307 10:15:03.500442    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:15:03.500453    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:15:03.536823    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:15:03.536834    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:15:03.541615    4223 logs.go:123] Gathering logs for kube-controller-manager [78c1d8b7fef3] ...
	I0307 10:15:03.541622    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c1d8b7fef3"
	I0307 10:15:03.553562    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:15:03.553576    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:15:03.595974    4223 logs.go:123] Gathering logs for etcd [9378fea4f127] ...
	I0307 10:15:03.595987    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9378fea4f127"
	I0307 10:15:03.610148    4223 logs.go:123] Gathering logs for storage-provisioner [e8e2f52b53d1] ...
	I0307 10:15:03.610161    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8e2f52b53d1"
	I0307 10:15:03.622094    4223 logs.go:123] Gathering logs for storage-provisioner [e7d817fc8222] ...
	I0307 10:15:03.622105    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7d817fc8222"
	I0307 10:15:03.633230    4223 logs.go:123] Gathering logs for kube-scheduler [60e5f0621b8c] ...
	I0307 10:15:03.633241    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e5f0621b8c"
	I0307 10:15:03.648337    4223 logs.go:123] Gathering logs for kube-controller-manager [7b2b4d88e84b] ...
	I0307 10:15:03.648349    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2b4d88e84b"
	I0307 10:15:03.665690    4223 logs.go:123] Gathering logs for coredns [acb346c0d2d8] ...
	I0307 10:15:03.665701    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb346c0d2d8"
	I0307 10:15:03.678835    4223 logs.go:123] Gathering logs for kube-scheduler [e50116dd9958] ...
	I0307 10:15:03.678848    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e50116dd9958"
	I0307 10:15:03.690297    4223 logs.go:123] Gathering logs for kube-proxy [438e893703a5] ...
	I0307 10:15:03.690310    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 438e893703a5"
	I0307 10:15:03.702520    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:15:03.702530    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:15:03.726327    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:15:03.726336    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:15:03.737992    4223 logs.go:123] Gathering logs for kube-apiserver [912e171f7628] ...
	I0307 10:15:03.738003    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912e171f7628"
	I0307 10:15:03.752043    4223 logs.go:123] Gathering logs for kube-apiserver [3b5187de19fa] ...
	I0307 10:15:03.752054    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b5187de19fa"
	I0307 10:15:06.271410    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:15:11.272173    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:15:11.272278    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:15:11.284749    4223 logs.go:276] 2 containers: [912e171f7628 3b5187de19fa]
	I0307 10:15:11.284828    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:15:11.296190    4223 logs.go:276] 2 containers: [e7399b2ae704 9378fea4f127]
	I0307 10:15:11.296263    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:15:11.308463    4223 logs.go:276] 1 containers: [acb346c0d2d8]
	I0307 10:15:11.308538    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:15:11.328344    4223 logs.go:276] 2 containers: [e50116dd9958 60e5f0621b8c]
	I0307 10:15:11.328431    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:15:11.340696    4223 logs.go:276] 1 containers: [438e893703a5]
	I0307 10:15:11.340777    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:15:11.352556    4223 logs.go:276] 2 containers: [7b2b4d88e84b 78c1d8b7fef3]
	I0307 10:15:11.352628    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:15:11.363924    4223 logs.go:276] 0 containers: []
	W0307 10:15:11.363935    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:15:11.363990    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:15:11.374811    4223 logs.go:276] 2 containers: [e8e2f52b53d1 e7d817fc8222]
	I0307 10:15:11.374830    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:15:11.374836    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:15:11.417988    4223 logs.go:123] Gathering logs for kube-controller-manager [78c1d8b7fef3] ...
	I0307 10:15:11.418006    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c1d8b7fef3"
	I0307 10:15:11.432024    4223 logs.go:123] Gathering logs for storage-provisioner [e7d817fc8222] ...
	I0307 10:15:11.432037    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7d817fc8222"
	I0307 10:15:11.449009    4223 logs.go:123] Gathering logs for coredns [acb346c0d2d8] ...
	I0307 10:15:11.449022    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb346c0d2d8"
	I0307 10:15:11.461744    4223 logs.go:123] Gathering logs for kube-scheduler [e50116dd9958] ...
	I0307 10:15:11.461757    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e50116dd9958"
	I0307 10:15:11.476056    4223 logs.go:123] Gathering logs for kube-scheduler [60e5f0621b8c] ...
	I0307 10:15:11.476076    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e5f0621b8c"
	I0307 10:15:11.496281    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:15:11.496311    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:15:11.538892    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:15:11.538915    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:15:11.543680    4223 logs.go:123] Gathering logs for kube-apiserver [3b5187de19fa] ...
	I0307 10:15:11.543693    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b5187de19fa"
	I0307 10:15:11.563949    4223 logs.go:123] Gathering logs for kube-proxy [438e893703a5] ...
	I0307 10:15:11.563969    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 438e893703a5"
	I0307 10:15:11.579129    4223 logs.go:123] Gathering logs for storage-provisioner [e8e2f52b53d1] ...
	I0307 10:15:11.579145    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8e2f52b53d1"
	I0307 10:15:11.597776    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:15:11.597788    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:15:11.630304    4223 logs.go:123] Gathering logs for kube-controller-manager [7b2b4d88e84b] ...
	I0307 10:15:11.630327    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2b4d88e84b"
	I0307 10:15:11.661612    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:15:11.661629    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:15:11.681292    4223 logs.go:123] Gathering logs for kube-apiserver [912e171f7628] ...
	I0307 10:15:11.681308    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912e171f7628"
	I0307 10:15:11.720893    4223 logs.go:123] Gathering logs for etcd [e7399b2ae704] ...
	I0307 10:15:11.720912    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7399b2ae704"
	I0307 10:15:11.743331    4223 logs.go:123] Gathering logs for etcd [9378fea4f127] ...
	I0307 10:15:11.743346    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9378fea4f127"
	I0307 10:15:14.269085    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:15:19.271169    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:15:19.271325    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:15:19.282484    4223 logs.go:276] 2 containers: [912e171f7628 3b5187de19fa]
	I0307 10:15:19.282573    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:15:19.293441    4223 logs.go:276] 2 containers: [e7399b2ae704 9378fea4f127]
	I0307 10:15:19.293520    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:15:19.304121    4223 logs.go:276] 1 containers: [acb346c0d2d8]
	I0307 10:15:19.304189    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:15:19.315104    4223 logs.go:276] 2 containers: [e50116dd9958 60e5f0621b8c]
	I0307 10:15:19.315175    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:15:19.326188    4223 logs.go:276] 1 containers: [438e893703a5]
	I0307 10:15:19.326254    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:15:19.337123    4223 logs.go:276] 2 containers: [7b2b4d88e84b 78c1d8b7fef3]
	I0307 10:15:19.337193    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:15:19.347660    4223 logs.go:276] 0 containers: []
	W0307 10:15:19.347673    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:15:19.347737    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:15:19.371539    4223 logs.go:276] 2 containers: [e8e2f52b53d1 e7d817fc8222]
	I0307 10:15:19.371559    4223 logs.go:123] Gathering logs for kube-apiserver [912e171f7628] ...
	I0307 10:15:19.371564    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912e171f7628"
	I0307 10:15:19.386953    4223 logs.go:123] Gathering logs for etcd [e7399b2ae704] ...
	I0307 10:15:19.386965    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7399b2ae704"
	I0307 10:15:19.401132    4223 logs.go:123] Gathering logs for coredns [acb346c0d2d8] ...
	I0307 10:15:19.401143    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb346c0d2d8"
	I0307 10:15:19.412791    4223 logs.go:123] Gathering logs for kube-scheduler [e50116dd9958] ...
	I0307 10:15:19.412819    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e50116dd9958"
	I0307 10:15:19.425348    4223 logs.go:123] Gathering logs for kube-apiserver [3b5187de19fa] ...
	I0307 10:15:19.425362    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b5187de19fa"
	I0307 10:15:19.445145    4223 logs.go:123] Gathering logs for kube-scheduler [60e5f0621b8c] ...
	I0307 10:15:19.445161    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e5f0621b8c"
	I0307 10:15:19.465515    4223 logs.go:123] Gathering logs for kube-controller-manager [78c1d8b7fef3] ...
	I0307 10:15:19.465530    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c1d8b7fef3"
	I0307 10:15:19.487036    4223 logs.go:123] Gathering logs for storage-provisioner [e7d817fc8222] ...
	I0307 10:15:19.487048    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7d817fc8222"
	I0307 10:15:19.504798    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:15:19.504811    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:15:19.541394    4223 logs.go:123] Gathering logs for kube-proxy [438e893703a5] ...
	I0307 10:15:19.541405    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 438e893703a5"
	I0307 10:15:19.553956    4223 logs.go:123] Gathering logs for storage-provisioner [e8e2f52b53d1] ...
	I0307 10:15:19.553970    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8e2f52b53d1"
	I0307 10:15:19.565860    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:15:19.565872    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:15:19.592381    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:15:19.592392    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:15:19.631093    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:15:19.631103    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:15:19.635549    4223 logs.go:123] Gathering logs for etcd [9378fea4f127] ...
	I0307 10:15:19.635557    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9378fea4f127"
	I0307 10:15:19.649660    4223 logs.go:123] Gathering logs for kube-controller-manager [7b2b4d88e84b] ...
	I0307 10:15:19.649672    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2b4d88e84b"
	I0307 10:15:19.667372    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:15:19.667384    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:15:22.182020    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:15:27.183716    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:15:27.184240    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:15:27.220542    4223 logs.go:276] 2 containers: [912e171f7628 3b5187de19fa]
	I0307 10:15:27.220696    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:15:27.241664    4223 logs.go:276] 2 containers: [e7399b2ae704 9378fea4f127]
	I0307 10:15:27.241767    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:15:27.256351    4223 logs.go:276] 1 containers: [acb346c0d2d8]
	I0307 10:15:27.256426    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:15:27.269197    4223 logs.go:276] 2 containers: [e50116dd9958 60e5f0621b8c]
	I0307 10:15:27.269273    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:15:27.280082    4223 logs.go:276] 1 containers: [438e893703a5]
	I0307 10:15:27.280149    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:15:27.291909    4223 logs.go:276] 2 containers: [7b2b4d88e84b 78c1d8b7fef3]
	I0307 10:15:27.291995    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:15:27.301989    4223 logs.go:276] 0 containers: []
	W0307 10:15:27.302001    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:15:27.302062    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:15:27.319984    4223 logs.go:276] 2 containers: [e8e2f52b53d1 e7d817fc8222]
	I0307 10:15:27.320017    4223 logs.go:123] Gathering logs for etcd [e7399b2ae704] ...
	I0307 10:15:27.320023    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7399b2ae704"
	I0307 10:15:27.333705    4223 logs.go:123] Gathering logs for kube-proxy [438e893703a5] ...
	I0307 10:15:27.333723    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 438e893703a5"
	I0307 10:15:27.350439    4223 logs.go:123] Gathering logs for coredns [acb346c0d2d8] ...
	I0307 10:15:27.350453    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb346c0d2d8"
	I0307 10:15:27.362599    4223 logs.go:123] Gathering logs for storage-provisioner [e7d817fc8222] ...
	I0307 10:15:27.362611    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7d817fc8222"
	I0307 10:15:27.374163    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:15:27.374176    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:15:27.378431    4223 logs.go:123] Gathering logs for kube-apiserver [3b5187de19fa] ...
	I0307 10:15:27.378437    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b5187de19fa"
	I0307 10:15:27.397065    4223 logs.go:123] Gathering logs for kube-controller-manager [78c1d8b7fef3] ...
	I0307 10:15:27.397076    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c1d8b7fef3"
	I0307 10:15:27.414331    4223 logs.go:123] Gathering logs for storage-provisioner [e8e2f52b53d1] ...
	I0307 10:15:27.414341    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8e2f52b53d1"
	I0307 10:15:27.425810    4223 logs.go:123] Gathering logs for etcd [9378fea4f127] ...
	I0307 10:15:27.425821    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9378fea4f127"
	I0307 10:15:27.440887    4223 logs.go:123] Gathering logs for kube-scheduler [e50116dd9958] ...
	I0307 10:15:27.440897    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e50116dd9958"
	I0307 10:15:27.454020    4223 logs.go:123] Gathering logs for kube-apiserver [912e171f7628] ...
	I0307 10:15:27.454031    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912e171f7628"
	I0307 10:15:27.468233    4223 logs.go:123] Gathering logs for kube-scheduler [60e5f0621b8c] ...
	I0307 10:15:27.468244    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e5f0621b8c"
	I0307 10:15:27.483351    4223 logs.go:123] Gathering logs for kube-controller-manager [7b2b4d88e84b] ...
	I0307 10:15:27.483363    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2b4d88e84b"
	I0307 10:15:27.500890    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:15:27.500902    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:15:27.525264    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:15:27.525275    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:15:27.537157    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:15:27.537171    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:15:27.574787    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:15:27.574816    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:15:30.122436    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:15:35.124553    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:15:35.124657    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:15:35.137383    4223 logs.go:276] 2 containers: [912e171f7628 3b5187de19fa]
	I0307 10:15:35.137462    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:15:35.149552    4223 logs.go:276] 2 containers: [e7399b2ae704 9378fea4f127]
	I0307 10:15:35.149627    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:15:35.161260    4223 logs.go:276] 1 containers: [acb346c0d2d8]
	I0307 10:15:35.161334    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:15:35.172384    4223 logs.go:276] 2 containers: [e50116dd9958 60e5f0621b8c]
	I0307 10:15:35.172460    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:15:35.183396    4223 logs.go:276] 1 containers: [438e893703a5]
	I0307 10:15:35.183465    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:15:35.194247    4223 logs.go:276] 2 containers: [7b2b4d88e84b 78c1d8b7fef3]
	I0307 10:15:35.194325    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:15:35.204485    4223 logs.go:276] 0 containers: []
	W0307 10:15:35.204497    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:15:35.204562    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:15:35.215364    4223 logs.go:276] 2 containers: [e8e2f52b53d1 e7d817fc8222]
	I0307 10:15:35.215383    4223 logs.go:123] Gathering logs for kube-apiserver [912e171f7628] ...
	I0307 10:15:35.215390    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912e171f7628"
	I0307 10:15:35.234248    4223 logs.go:123] Gathering logs for kube-proxy [438e893703a5] ...
	I0307 10:15:35.234258    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 438e893703a5"
	I0307 10:15:35.250757    4223 logs.go:123] Gathering logs for etcd [e7399b2ae704] ...
	I0307 10:15:35.250768    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7399b2ae704"
	I0307 10:15:35.269223    4223 logs.go:123] Gathering logs for storage-provisioner [e8e2f52b53d1] ...
	I0307 10:15:35.269234    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8e2f52b53d1"
	I0307 10:15:35.280645    4223 logs.go:123] Gathering logs for storage-provisioner [e7d817fc8222] ...
	I0307 10:15:35.280657    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7d817fc8222"
	I0307 10:15:35.292263    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:15:35.292275    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:15:35.315802    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:15:35.315813    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:15:35.320482    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:15:35.320490    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:15:35.358082    4223 logs.go:123] Gathering logs for etcd [9378fea4f127] ...
	I0307 10:15:35.358096    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9378fea4f127"
	I0307 10:15:35.380597    4223 logs.go:123] Gathering logs for kube-scheduler [e50116dd9958] ...
	I0307 10:15:35.380611    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e50116dd9958"
	I0307 10:15:35.395517    4223 logs.go:123] Gathering logs for kube-controller-manager [7b2b4d88e84b] ...
	I0307 10:15:35.395528    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2b4d88e84b"
	I0307 10:15:35.412980    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:15:35.412991    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:15:35.424685    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:15:35.424698    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:15:35.462457    4223 logs.go:123] Gathering logs for kube-apiserver [3b5187de19fa] ...
	I0307 10:15:35.462468    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b5187de19fa"
	I0307 10:15:35.482292    4223 logs.go:123] Gathering logs for kube-controller-manager [78c1d8b7fef3] ...
	I0307 10:15:35.482305    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c1d8b7fef3"
	I0307 10:15:35.499724    4223 logs.go:123] Gathering logs for coredns [acb346c0d2d8] ...
	I0307 10:15:35.499734    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb346c0d2d8"
	I0307 10:15:35.511175    4223 logs.go:123] Gathering logs for kube-scheduler [60e5f0621b8c] ...
	I0307 10:15:35.511188    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e5f0621b8c"
	I0307 10:15:38.027854    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:15:43.029997    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:15:43.030281    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:15:43.058342    4223 logs.go:276] 2 containers: [912e171f7628 3b5187de19fa]
	I0307 10:15:43.058476    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:15:43.075094    4223 logs.go:276] 2 containers: [e7399b2ae704 9378fea4f127]
	I0307 10:15:43.075172    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:15:43.088013    4223 logs.go:276] 1 containers: [acb346c0d2d8]
	I0307 10:15:43.088076    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:15:43.099757    4223 logs.go:276] 2 containers: [e50116dd9958 60e5f0621b8c]
	I0307 10:15:43.099835    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:15:43.110340    4223 logs.go:276] 1 containers: [438e893703a5]
	I0307 10:15:43.110407    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:15:43.121301    4223 logs.go:276] 2 containers: [7b2b4d88e84b 78c1d8b7fef3]
	I0307 10:15:43.121368    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:15:43.138258    4223 logs.go:276] 0 containers: []
	W0307 10:15:43.138271    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:15:43.138333    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:15:43.149093    4223 logs.go:276] 2 containers: [e8e2f52b53d1 e7d817fc8222]
	I0307 10:15:43.149113    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:15:43.149119    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:15:43.184473    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:15:43.184486    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:15:43.206760    4223 logs.go:123] Gathering logs for storage-provisioner [e8e2f52b53d1] ...
	I0307 10:15:43.206768    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8e2f52b53d1"
	I0307 10:15:43.218268    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:15:43.218280    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:15:43.230634    4223 logs.go:123] Gathering logs for kube-scheduler [e50116dd9958] ...
	I0307 10:15:43.230646    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e50116dd9958"
	I0307 10:15:43.242608    4223 logs.go:123] Gathering logs for kube-controller-manager [7b2b4d88e84b] ...
	I0307 10:15:43.242619    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2b4d88e84b"
	I0307 10:15:43.260503    4223 logs.go:123] Gathering logs for coredns [acb346c0d2d8] ...
	I0307 10:15:43.260514    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb346c0d2d8"
	I0307 10:15:43.272187    4223 logs.go:123] Gathering logs for kube-scheduler [60e5f0621b8c] ...
	I0307 10:15:43.272198    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e5f0621b8c"
	I0307 10:15:43.287121    4223 logs.go:123] Gathering logs for kube-proxy [438e893703a5] ...
	I0307 10:15:43.287133    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 438e893703a5"
	I0307 10:15:43.298681    4223 logs.go:123] Gathering logs for kube-controller-manager [78c1d8b7fef3] ...
	I0307 10:15:43.298693    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c1d8b7fef3"
	I0307 10:15:43.311940    4223 logs.go:123] Gathering logs for storage-provisioner [e7d817fc8222] ...
	I0307 10:15:43.311955    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7d817fc8222"
	I0307 10:15:43.324419    4223 logs.go:123] Gathering logs for kube-apiserver [912e171f7628] ...
	I0307 10:15:43.324429    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912e171f7628"
	I0307 10:15:43.338417    4223 logs.go:123] Gathering logs for etcd [e7399b2ae704] ...
	I0307 10:15:43.338430    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7399b2ae704"
	I0307 10:15:43.352989    4223 logs.go:123] Gathering logs for kube-apiserver [3b5187de19fa] ...
	I0307 10:15:43.353001    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b5187de19fa"
	I0307 10:15:43.373780    4223 logs.go:123] Gathering logs for etcd [9378fea4f127] ...
	I0307 10:15:43.373789    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9378fea4f127"
	I0307 10:15:43.392370    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:15:43.392381    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:15:43.429541    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:15:43.429549    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:15:45.935943    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:15:50.936414    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:15:50.936575    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:15:50.948515    4223 logs.go:276] 2 containers: [912e171f7628 3b5187de19fa]
	I0307 10:15:50.948591    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:15:50.958592    4223 logs.go:276] 2 containers: [e7399b2ae704 9378fea4f127]
	I0307 10:15:50.958663    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:15:50.969091    4223 logs.go:276] 1 containers: [acb346c0d2d8]
	I0307 10:15:50.969158    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:15:50.979744    4223 logs.go:276] 2 containers: [e50116dd9958 60e5f0621b8c]
	I0307 10:15:50.979813    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:15:50.990343    4223 logs.go:276] 1 containers: [438e893703a5]
	I0307 10:15:50.990415    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:15:51.001055    4223 logs.go:276] 2 containers: [7b2b4d88e84b 78c1d8b7fef3]
	I0307 10:15:51.001128    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:15:51.011195    4223 logs.go:276] 0 containers: []
	W0307 10:15:51.011208    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:15:51.011263    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:15:51.021805    4223 logs.go:276] 2 containers: [e8e2f52b53d1 e7d817fc8222]
	I0307 10:15:51.021822    4223 logs.go:123] Gathering logs for kube-proxy [438e893703a5] ...
	I0307 10:15:51.021827    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 438e893703a5"
	I0307 10:15:51.033433    4223 logs.go:123] Gathering logs for kube-controller-manager [7b2b4d88e84b] ...
	I0307 10:15:51.033444    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2b4d88e84b"
	I0307 10:15:51.050615    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:15:51.050625    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:15:51.062841    4223 logs.go:123] Gathering logs for kube-controller-manager [78c1d8b7fef3] ...
	I0307 10:15:51.062853    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c1d8b7fef3"
	I0307 10:15:51.076464    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:15:51.076474    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:15:51.080793    4223 logs.go:123] Gathering logs for etcd [e7399b2ae704] ...
	I0307 10:15:51.080802    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7399b2ae704"
	I0307 10:15:51.094645    4223 logs.go:123] Gathering logs for etcd [9378fea4f127] ...
	I0307 10:15:51.094654    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9378fea4f127"
	I0307 10:15:51.109046    4223 logs.go:123] Gathering logs for kube-scheduler [60e5f0621b8c] ...
	I0307 10:15:51.109057    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e5f0621b8c"
	I0307 10:15:51.123781    4223 logs.go:123] Gathering logs for coredns [acb346c0d2d8] ...
	I0307 10:15:51.123792    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb346c0d2d8"
	I0307 10:15:51.135055    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:15:51.135067    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:15:51.170878    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:15:51.170886    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:15:51.205027    4223 logs.go:123] Gathering logs for kube-apiserver [912e171f7628] ...
	I0307 10:15:51.205037    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912e171f7628"
	I0307 10:15:51.219774    4223 logs.go:123] Gathering logs for kube-apiserver [3b5187de19fa] ...
	I0307 10:15:51.219786    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b5187de19fa"
	I0307 10:15:51.239068    4223 logs.go:123] Gathering logs for kube-scheduler [e50116dd9958] ...
	I0307 10:15:51.239079    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e50116dd9958"
	I0307 10:15:51.251364    4223 logs.go:123] Gathering logs for storage-provisioner [e8e2f52b53d1] ...
	I0307 10:15:51.251376    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8e2f52b53d1"
	I0307 10:15:51.263161    4223 logs.go:123] Gathering logs for storage-provisioner [e7d817fc8222] ...
	I0307 10:15:51.263176    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7d817fc8222"
	I0307 10:15:51.274092    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:15:51.274103    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:15:53.797717    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:15:58.799933    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:15:58.800242    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:15:58.830672    4223 logs.go:276] 2 containers: [912e171f7628 3b5187de19fa]
	I0307 10:15:58.830769    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:15:58.855159    4223 logs.go:276] 2 containers: [e7399b2ae704 9378fea4f127]
	I0307 10:15:58.855244    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:15:58.873702    4223 logs.go:276] 1 containers: [acb346c0d2d8]
	I0307 10:15:58.873778    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:15:58.887539    4223 logs.go:276] 2 containers: [e50116dd9958 60e5f0621b8c]
	I0307 10:15:58.887608    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:15:58.898031    4223 logs.go:276] 1 containers: [438e893703a5]
	I0307 10:15:58.898105    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:15:58.909653    4223 logs.go:276] 2 containers: [7b2b4d88e84b 78c1d8b7fef3]
	I0307 10:15:58.909729    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:15:58.920390    4223 logs.go:276] 0 containers: []
	W0307 10:15:58.920403    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:15:58.920463    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:15:58.930490    4223 logs.go:276] 2 containers: [e8e2f52b53d1 e7d817fc8222]
	I0307 10:15:58.930507    4223 logs.go:123] Gathering logs for etcd [e7399b2ae704] ...
	I0307 10:15:58.930513    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7399b2ae704"
	I0307 10:15:58.948761    4223 logs.go:123] Gathering logs for coredns [acb346c0d2d8] ...
	I0307 10:15:58.948771    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb346c0d2d8"
	I0307 10:15:58.960163    4223 logs.go:123] Gathering logs for kube-scheduler [e50116dd9958] ...
	I0307 10:15:58.960174    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e50116dd9958"
	I0307 10:15:58.971387    4223 logs.go:123] Gathering logs for kube-controller-manager [7b2b4d88e84b] ...
	I0307 10:15:58.971402    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2b4d88e84b"
	I0307 10:15:58.993147    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:15:58.993159    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:15:59.016593    4223 logs.go:123] Gathering logs for etcd [9378fea4f127] ...
	I0307 10:15:59.016602    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9378fea4f127"
	I0307 10:15:59.030913    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:15:59.030926    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:15:59.043081    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:15:59.043094    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:15:59.081011    4223 logs.go:123] Gathering logs for kube-apiserver [912e171f7628] ...
	I0307 10:15:59.081021    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912e171f7628"
	I0307 10:15:59.096242    4223 logs.go:123] Gathering logs for kube-controller-manager [78c1d8b7fef3] ...
	I0307 10:15:59.096254    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c1d8b7fef3"
	I0307 10:15:59.108132    4223 logs.go:123] Gathering logs for storage-provisioner [e7d817fc8222] ...
	I0307 10:15:59.108146    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7d817fc8222"
	I0307 10:15:59.118715    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:15:59.118726    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:15:59.123073    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:15:59.123079    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:15:59.158804    4223 logs.go:123] Gathering logs for kube-apiserver [3b5187de19fa] ...
	I0307 10:15:59.158815    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b5187de19fa"
	I0307 10:15:59.179109    4223 logs.go:123] Gathering logs for kube-scheduler [60e5f0621b8c] ...
	I0307 10:15:59.179119    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e5f0621b8c"
	I0307 10:15:59.193721    4223 logs.go:123] Gathering logs for kube-proxy [438e893703a5] ...
	I0307 10:15:59.193732    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 438e893703a5"
	I0307 10:15:59.205620    4223 logs.go:123] Gathering logs for storage-provisioner [e8e2f52b53d1] ...
	I0307 10:15:59.205632    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8e2f52b53d1"
	I0307 10:16:01.718841    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:16:06.720975    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:06.721064    4223 kubeadm.go:591] duration metric: took 4m3.932536792s to restartPrimaryControlPlane
	W0307 10:16:06.721154    4223 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0307 10:16:06.721177    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
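	With the restart abandoned, minikube tears the old control plane down before re-initialising. This is kubeadm's standard reset: --force skips the confirmation prompt and --cri-socket points it at cri-dockerd. Reset removes the /etc/kubernetes/*.conf kubeconfigs and the static-pod manifests, which the ls check a few lines below (exit status 2) confirms.

	    # The teardown above, minus minikube's PATH wrapper:
	    sudo kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force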
	I0307 10:16:07.668176    4223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 10:16:07.673160    4223 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 10:16:07.676140    4223 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 10:16:07.678918    4223 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 10:16:07.678923    4223 kubeadm.go:156] found existing configuration files:
	
	I0307 10:16:07.678945    4223 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/admin.conf
	I0307 10:16:07.681430    4223 kubeadm.go:162] "https://control-plane.minikube.internal:50305" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0307 10:16:07.681455    4223 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0307 10:16:07.684216    4223 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/kubelet.conf
	I0307 10:16:07.686834    4223 kubeadm.go:162] "https://control-plane.minikube.internal:50305" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0307 10:16:07.686859    4223 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0307 10:16:07.689408    4223 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/controller-manager.conf
	I0307 10:16:07.692382    4223 kubeadm.go:162] "https://control-plane.minikube.internal:50305" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0307 10:16:07.692408    4223 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0307 10:16:07.695175    4223 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/scheduler.conf
	I0307 10:16:07.697702    4223 kubeadm.go:162] "https://control-plane.minikube.internal:50305" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0307 10:16:07.697726    4223 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
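	The sweep above applies the same rule to each kubeconfig: keep the file only if it already references the expected control-plane endpoint, otherwise delete it. Since the reset already removed every file, each grep exits with status 2 and the rm -f calls are no-ops. Per file, the logic is:

	    # Keep the kubeconfig only if it points at the expected endpoint.
	    sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/admin.conf \
	      || sudo rm -f /etc/kubernetes/admin.conf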
	I0307 10:16:07.700451    4223 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
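	The long --ignore-preflight-errors list lets init proceed on a node that was just reset: it waives the checks for non-empty manifest and etcd directories, existing static-pod manifests, a busy kubelet port 10250, swap, and the CPU/memory minimums. A trimmed sketch of the same invocation (the full ignore list is as shown above); the phase-by-phase output that follows is kubeadm's own:

	    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
	      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	      --ignore-preflight-errors=Port-10250,Swap,NumCPU,Mem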
	I0307 10:16:07.719257    4223 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0307 10:16:07.719389    4223 kubeadm.go:309] [preflight] Running pre-flight checks
	I0307 10:16:07.773563    4223 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0307 10:16:07.773620    4223 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0307 10:16:07.773685    4223 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0307 10:16:07.823709    4223 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0307 10:16:07.827952    4223 out.go:204]   - Generating certificates and keys ...
	I0307 10:16:07.827992    4223 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0307 10:16:07.828037    4223 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0307 10:16:07.828074    4223 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0307 10:16:07.828108    4223 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0307 10:16:07.828146    4223 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0307 10:16:07.828178    4223 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0307 10:16:07.828207    4223 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0307 10:16:07.828236    4223 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0307 10:16:07.828277    4223 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0307 10:16:07.828314    4223 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0307 10:16:07.828343    4223 kubeadm.go:309] [certs] Using the existing "sa" key
	I0307 10:16:07.828374    4223 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0307 10:16:07.853310    4223 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0307 10:16:08.047609    4223 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0307 10:16:08.125670    4223 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0307 10:16:08.196261    4223 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0307 10:16:08.226684    4223 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 10:16:08.228561    4223 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 10:16:08.228600    4223 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0307 10:16:08.320400    4223 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0307 10:16:08.324518    4223 out.go:204]   - Booting up control plane ...
	I0307 10:16:08.324613    4223 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0307 10:16:08.324661    4223 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0307 10:16:08.324694    4223 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0307 10:16:08.324740    4223 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0307 10:16:08.324825    4223 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0307 10:16:12.325090    4223 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.005091 seconds
	I0307 10:16:12.325149    4223 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0307 10:16:12.329092    4223 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0307 10:16:12.841484    4223 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0307 10:16:12.841678    4223 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-064000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0307 10:16:13.348591    4223 kubeadm.go:309] [bootstrap-token] Using token: li2qax.cp8x61rj2vxpi2xh
	I0307 10:16:13.354908    4223 out.go:204]   - Configuring RBAC rules ...
	I0307 10:16:13.354984    4223 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0307 10:16:13.355057    4223 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0307 10:16:13.357851    4223 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0307 10:16:13.359134    4223 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0307 10:16:13.360203    4223 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0307 10:16:13.361232    4223 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0307 10:16:13.365153    4223 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0307 10:16:13.547157    4223 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0307 10:16:13.753436    4223 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0307 10:16:13.754011    4223 kubeadm.go:309] 
	I0307 10:16:13.754046    4223 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0307 10:16:13.754051    4223 kubeadm.go:309] 
	I0307 10:16:13.754091    4223 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0307 10:16:13.754108    4223 kubeadm.go:309] 
	I0307 10:16:13.754125    4223 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0307 10:16:13.754168    4223 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0307 10:16:13.754201    4223 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0307 10:16:13.754206    4223 kubeadm.go:309] 
	I0307 10:16:13.754238    4223 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0307 10:16:13.754243    4223 kubeadm.go:309] 
	I0307 10:16:13.754269    4223 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0307 10:16:13.754273    4223 kubeadm.go:309] 
	I0307 10:16:13.754304    4223 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0307 10:16:13.754344    4223 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0307 10:16:13.754399    4223 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0307 10:16:13.754405    4223 kubeadm.go:309] 
	I0307 10:16:13.754459    4223 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0307 10:16:13.754501    4223 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0307 10:16:13.754504    4223 kubeadm.go:309] 
	I0307 10:16:13.754564    4223 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token li2qax.cp8x61rj2vxpi2xh \
	I0307 10:16:13.754624    4223 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4f3f709732e35797580c9b8d11f82ef6c52734bdc7940106dd5836141654d720 \
	I0307 10:16:13.754637    4223 kubeadm.go:309] 	--control-plane 
	I0307 10:16:13.754642    4223 kubeadm.go:309] 
	I0307 10:16:13.754704    4223 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0307 10:16:13.754707    4223 kubeadm.go:309] 
	I0307 10:16:13.754748    4223 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token li2qax.cp8x61rj2vxpi2xh \
	I0307 10:16:13.754818    4223 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4f3f709732e35797580c9b8d11f82ef6c52734bdc7940106dd5836141654d720 
	I0307 10:16:13.754873    4223 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0307 10:16:13.754879    4223 cni.go:84] Creating CNI manager for ""
	I0307 10:16:13.754887    4223 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 10:16:13.762532    4223 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0307 10:16:13.766579    4223 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0307 10:16:13.769558    4223 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
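	The 457-byte conflist written here selects the bridge plugin recommended two lines earlier. A sketch of the kind of file minikube writes (field values are illustrative, not the exact bytes on disk):

	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }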
	I0307 10:16:13.776816    4223 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0307 10:16:13.776886    4223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-064000 minikube.k8s.io/updated_at=2024_03_07T10_16_13_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c2be9b5c96c7962d271300acacf405d2402b272f minikube.k8s.io/name=running-upgrade-064000 minikube.k8s.io/primary=true
	I0307 10:16:13.776889    4223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 10:16:13.787598    4223 ops.go:34] apiserver oom_adj: -16
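	The oom_adj check above confirms the apiserver is protected: a strongly negative value tells the kernel's OOM killer to avoid that process, and -16 is the legacy oom_adj equivalent of the score the kubelet assigns to critical static pods. Standalone:

	    # -16 here means the OOM killer will spare the apiserver.
	    cat /proc/$(pgrep kube-apiserver)/oom_adj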
	I0307 10:16:13.822210    4223 kubeadm.go:1106] duration metric: took 45.370792ms to wait for elevateKubeSystemPrivileges
	W0307 10:16:13.822249    4223 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0307 10:16:13.822254    4223 kubeadm.go:393] duration metric: took 4m11.047378083s to StartCluster
	I0307 10:16:13.822262    4223 settings.go:142] acquiring lock: {Name:mke72688bb63f8128eac153bbf90929d78ec9d29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:16:13.822563    4223 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:16:13.823044    4223 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/kubeconfig: {Name:mkeef9e7922e618c2ac8219607b646aeaf5f61cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:16:13.823239    4223 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:16:13.826610    4223 out.go:177] * Verifying Kubernetes components...
	I0307 10:16:13.823291    4223 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0307 10:16:13.823405    4223 config.go:182] Loaded profile config "running-upgrade-064000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 10:16:13.834501    4223 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-064000"
	I0307 10:16:13.834517    4223 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-064000"
	W0307 10:16:13.834521    4223 addons.go:243] addon storage-provisioner should already be in state true
	I0307 10:16:13.834537    4223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:16:13.834538    4223 host.go:66] Checking if "running-upgrade-064000" exists ...
	I0307 10:16:13.834521    4223 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-064000"
	I0307 10:16:13.834632    4223 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-064000"
	I0307 10:16:13.836033    4223 kapi.go:59] client config for running-upgrade-064000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/running-upgrade-064000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/running-upgrade-064000/client.key", CAFile:"/Users/jenkins/minikube-integration/18241-1349/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10225f6a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}

	I0307 10:16:13.836242    4223 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-064000"
	W0307 10:16:13.836248    4223 addons.go:243] addon default-storageclass should already be in state true
	I0307 10:16:13.836254    4223 host.go:66] Checking if "running-upgrade-064000" exists ...
	I0307 10:16:13.840531    4223 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 10:16:13.843543    4223 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 10:16:13.843550    4223 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0307 10:16:13.843557    4223 sshutil.go:53] new ssh client: &{IP:localhost Port:50273 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/running-upgrade-064000/id_rsa Username:docker}
	I0307 10:16:13.844303    4223 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0307 10:16:13.844307    4223 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0307 10:16:13.844311    4223 sshutil.go:53] new ssh client: &{IP:localhost Port:50273 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/running-upgrade-064000/id_rsa Username:docker}
	I0307 10:16:13.924513    4223 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 10:16:13.930599    4223 api_server.go:52] waiting for apiserver process to appear ...
	I0307 10:16:13.930654    4223 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 10:16:13.934721    4223 api_server.go:72] duration metric: took 111.475041ms to wait for apiserver process to appear ...
	I0307 10:16:13.934728    4223 api_server.go:88] waiting for apiserver healthz status ...
	I0307 10:16:13.934735    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:16:13.980753    4223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 10:16:13.982533    4223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0307 10:16:18.936753    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:18.936859    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:16:23.937410    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:23.937439    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:16:28.937724    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:28.937747    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:16:33.938147    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:33.938193    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:16:38.938901    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:38.938941    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:16:43.939882    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:43.939926    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0307 10:16:44.338360    4223 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
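	Of the two addons, only default-storageclass talks to the apiserver from the test host via the Go client configured above, and that connection times out; the storage-provisioner manifest was applied with kubectl inside the guest and went through. A hypothetical in-guest reproduction of the failed listing, which would hang the same way against this cluster:

	    sudo /var/lib/minikube/binaries/v1.24.1/kubectl \
	      --kubeconfig=/var/lib/minikube/kubeconfig get storageclasses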
	I0307 10:16:44.341627    4223 out.go:177] * Enabled addons: storage-provisioner
	I0307 10:16:44.350551    4223 addons.go:505] duration metric: took 30.528298041s for enable addons: enabled=[storage-provisioner]
	I0307 10:16:48.941054    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:48.941077    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:16:53.942505    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:53.942552    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:16:58.944481    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:58.944515    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:17:03.946625    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:17:03.946664    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:17:08.948777    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:17:08.948820    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:17:13.950952    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:17:13.951113    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:17:13.966222    4223 logs.go:276] 1 containers: [bfad85a2aa85]
	I0307 10:17:13.966303    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:17:13.977103    4223 logs.go:276] 1 containers: [f5dd3c2f1586]
	I0307 10:17:13.977170    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:17:13.988116    4223 logs.go:276] 2 containers: [8f40abedda95 b646ef99863b]
	I0307 10:17:13.988189    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:17:14.005692    4223 logs.go:276] 1 containers: [89b036ed2ce0]
	I0307 10:17:14.005764    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:17:14.016923    4223 logs.go:276] 1 containers: [91458cddd2a8]
	I0307 10:17:14.016996    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:17:14.034372    4223 logs.go:276] 1 containers: [a5eda657976b]
	I0307 10:17:14.034455    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:17:14.056723    4223 logs.go:276] 0 containers: []
	W0307 10:17:14.056741    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:17:14.056814    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:17:14.069659    4223 logs.go:276] 1 containers: [e098dcf633e8]
	I0307 10:17:14.069677    4223 logs.go:123] Gathering logs for kube-controller-manager [a5eda657976b] ...
	I0307 10:17:14.069683    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5eda657976b"
	I0307 10:17:14.088664    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:17:14.088674    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:17:14.112269    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:17:14.112277    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:17:14.116893    4223 logs.go:123] Gathering logs for kube-apiserver [bfad85a2aa85] ...
	I0307 10:17:14.116899    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfad85a2aa85"
	I0307 10:17:14.131326    4223 logs.go:123] Gathering logs for coredns [8f40abedda95] ...
	I0307 10:17:14.131343    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f40abedda95"
	I0307 10:17:14.142910    4223 logs.go:123] Gathering logs for coredns [b646ef99863b] ...
	I0307 10:17:14.142922    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b646ef99863b"
	I0307 10:17:14.154268    4223 logs.go:123] Gathering logs for kube-scheduler [89b036ed2ce0] ...
	I0307 10:17:14.154280    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89b036ed2ce0"
	I0307 10:17:14.169135    4223 logs.go:123] Gathering logs for kube-proxy [91458cddd2a8] ...
	I0307 10:17:14.169148    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91458cddd2a8"
	I0307 10:17:14.180307    4223 logs.go:123] Gathering logs for storage-provisioner [e098dcf633e8] ...
	I0307 10:17:14.180317    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098dcf633e8"
	I0307 10:17:14.192124    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:17:14.192135    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:17:14.203502    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:17:14.203514    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 10:17:14.237030    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:17:14.237124    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
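	The two reflector errors flagged above mean the kubelet's node identity is not yet authorized to read the coredns ConfigMap: the node authorizer only grants a kubelet access to an object once it can relate that object to pods bound to the node, so this is common immediately after an init and normally clears on its own. A hypothetical manual check of the same permission:

	    kubectl auth can-i list configmaps -n kube-system \
	      --as=system:node:running-upgrade-064000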
	I0307 10:17:14.238065    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:17:14.238072    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:17:14.273124    4223 logs.go:123] Gathering logs for etcd [f5dd3c2f1586] ...
	I0307 10:17:14.273142    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5dd3c2f1586"
	I0307 10:17:14.287819    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:17:14.287829    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 10:17:14.287858    4223 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0307 10:17:14.287863    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	  Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:17:14.287867    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	  Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:17:14.287891    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:17:14.287896    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:17:24.291783    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:17:29.294125    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:17:29.294249    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:17:29.306450    4223 logs.go:276] 1 containers: [bfad85a2aa85]
	I0307 10:17:29.306524    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:17:29.320220    4223 logs.go:276] 1 containers: [f5dd3c2f1586]
	I0307 10:17:29.320295    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:17:29.330851    4223 logs.go:276] 2 containers: [8f40abedda95 b646ef99863b]
	I0307 10:17:29.330923    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:17:29.341262    4223 logs.go:276] 1 containers: [89b036ed2ce0]
	I0307 10:17:29.341330    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:17:29.352256    4223 logs.go:276] 1 containers: [91458cddd2a8]
	I0307 10:17:29.352333    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:17:29.362874    4223 logs.go:276] 1 containers: [a5eda657976b]
	I0307 10:17:29.362938    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:17:29.373094    4223 logs.go:276] 0 containers: []
	W0307 10:17:29.373104    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:17:29.373155    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:17:29.390619    4223 logs.go:276] 1 containers: [e098dcf633e8]
	I0307 10:17:29.390633    4223 logs.go:123] Gathering logs for kube-apiserver [bfad85a2aa85] ...
	I0307 10:17:29.390638    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfad85a2aa85"
	I0307 10:17:29.404637    4223 logs.go:123] Gathering logs for etcd [f5dd3c2f1586] ...
	I0307 10:17:29.404650    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5dd3c2f1586"
	I0307 10:17:29.418640    4223 logs.go:123] Gathering logs for coredns [8f40abedda95] ...
	I0307 10:17:29.418651    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f40abedda95"
	I0307 10:17:29.429788    4223 logs.go:123] Gathering logs for coredns [b646ef99863b] ...
	I0307 10:17:29.429804    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b646ef99863b"
	I0307 10:17:29.441040    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:17:29.441050    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:17:29.465471    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:17:29.465478    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:17:29.477468    4223 logs.go:123] Gathering logs for storage-provisioner [e098dcf633e8] ...
	I0307 10:17:29.477479    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098dcf633e8"
	I0307 10:17:29.488871    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:17:29.488882    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 10:17:29.523842    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:17:29.523936    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:17:29.524890    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:17:29.524895    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:17:29.529115    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:17:29.529120    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:17:29.573917    4223 logs.go:123] Gathering logs for kube-scheduler [89b036ed2ce0] ...
	I0307 10:17:29.573929    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89b036ed2ce0"
	I0307 10:17:29.589783    4223 logs.go:123] Gathering logs for kube-proxy [91458cddd2a8] ...
	I0307 10:17:29.589795    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91458cddd2a8"
	I0307 10:17:29.601565    4223 logs.go:123] Gathering logs for kube-controller-manager [a5eda657976b] ...
	I0307 10:17:29.601579    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5eda657976b"
	I0307 10:17:29.622309    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:17:29.622319    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 10:17:29.622344    4223 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0307 10:17:29.622348    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	  Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:17:29.622352    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	  Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:17:29.622356    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:17:29.622359    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:17:39.625301    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:17:44.626488    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:17:44.626738    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:17:44.653101    4223 logs.go:276] 1 containers: [bfad85a2aa85]
	I0307 10:17:44.653219    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:17:44.670687    4223 logs.go:276] 1 containers: [f5dd3c2f1586]
	I0307 10:17:44.670769    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:17:44.683946    4223 logs.go:276] 2 containers: [8f40abedda95 b646ef99863b]
	I0307 10:17:44.684008    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:17:44.695518    4223 logs.go:276] 1 containers: [89b036ed2ce0]
	I0307 10:17:44.695588    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:17:44.705818    4223 logs.go:276] 1 containers: [91458cddd2a8]
	I0307 10:17:44.705880    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:17:44.716512    4223 logs.go:276] 1 containers: [a5eda657976b]
	I0307 10:17:44.716583    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:17:44.726740    4223 logs.go:276] 0 containers: []
	W0307 10:17:44.726755    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:17:44.726810    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:17:44.737431    4223 logs.go:276] 1 containers: [e098dcf633e8]
	I0307 10:17:44.737447    4223 logs.go:123] Gathering logs for coredns [b646ef99863b] ...
	I0307 10:17:44.737452    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b646ef99863b"
	I0307 10:17:44.749244    4223 logs.go:123] Gathering logs for kube-scheduler [89b036ed2ce0] ...
	I0307 10:17:44.749258    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89b036ed2ce0"
	I0307 10:17:44.763946    4223 logs.go:123] Gathering logs for kube-proxy [91458cddd2a8] ...
	I0307 10:17:44.763957    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91458cddd2a8"
	I0307 10:17:44.775528    4223 logs.go:123] Gathering logs for storage-provisioner [e098dcf633e8] ...
	I0307 10:17:44.775541    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098dcf633e8"
	I0307 10:17:44.787098    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:17:44.787110    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:17:44.798369    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:17:44.798378    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:17:44.803177    4223 logs.go:123] Gathering logs for kube-apiserver [bfad85a2aa85] ...
	I0307 10:17:44.803182    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfad85a2aa85"
	I0307 10:17:44.817832    4223 logs.go:123] Gathering logs for etcd [f5dd3c2f1586] ...
	I0307 10:17:44.817843    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5dd3c2f1586"
	I0307 10:17:44.836932    4223 logs.go:123] Gathering logs for coredns [8f40abedda95] ...
	I0307 10:17:44.836946    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f40abedda95"
	I0307 10:17:44.848652    4223 logs.go:123] Gathering logs for kube-controller-manager [a5eda657976b] ...
	I0307 10:17:44.848666    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5eda657976b"
	I0307 10:17:44.866674    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:17:44.866686    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:17:44.890602    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:17:44.890611    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 10:17:44.923775    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:17:44.923870    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:17:44.924823    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:17:44.924831    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:17:44.959701    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:17:44.959716    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 10:17:44.959745    4223 out.go:239] X Problems detected in kubelet:
	W0307 10:17:44.959752    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:17:44.959755    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:17:44.959759    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:17:44.959762    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
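The block above is one iteration of minikube's recovery loop: it probes the apiserver healthz endpoint, gives up after the 5s client timeout, re-collects component logs, waits ~10s, and retries. A minimal sketch of reproducing the same probe by hand, assuming shell access to the guest (for example via "minikube ssh" against the running-upgrade-064000 profile); the IP, port, and unit name are taken from the log lines above:

    # Probe the endpoint api_server.go is polling; -k skips TLS verification
    # and --max-time mirrors the 5-second client timeout seen in the log.
    curl -k --max-time 5 https://10.0.2.15:8443/healthz

    # Re-run the kubelet log collection that surfaces the coredns problem.
    sudo journalctl -u kubelet -n 400 | grep 'reflector.go'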
	I0307 10:17:54.963612    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:17:59.965853    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:17:59.966000    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:17:59.982370    4223 logs.go:276] 1 containers: [bfad85a2aa85]
	I0307 10:17:59.982464    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:17:59.994584    4223 logs.go:276] 1 containers: [f5dd3c2f1586]
	I0307 10:17:59.994656    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:18:00.006069    4223 logs.go:276] 2 containers: [8f40abedda95 b646ef99863b]
	I0307 10:18:00.006138    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:18:00.016628    4223 logs.go:276] 1 containers: [89b036ed2ce0]
	I0307 10:18:00.016702    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:18:00.027213    4223 logs.go:276] 1 containers: [91458cddd2a8]
	I0307 10:18:00.027276    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:18:00.037929    4223 logs.go:276] 1 containers: [a5eda657976b]
	I0307 10:18:00.038008    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:18:00.048034    4223 logs.go:276] 0 containers: []
	W0307 10:18:00.048045    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:18:00.048105    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:18:00.058943    4223 logs.go:276] 1 containers: [e098dcf633e8]
	I0307 10:18:00.058959    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:18:00.058965    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:18:00.063348    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:18:00.063356    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:18:00.099137    4223 logs.go:123] Gathering logs for coredns [8f40abedda95] ...
	I0307 10:18:00.099150    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f40abedda95"
	I0307 10:18:00.111760    4223 logs.go:123] Gathering logs for coredns [b646ef99863b] ...
	I0307 10:18:00.111772    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b646ef99863b"
	I0307 10:18:00.124687    4223 logs.go:123] Gathering logs for kube-scheduler [89b036ed2ce0] ...
	I0307 10:18:00.124698    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89b036ed2ce0"
	I0307 10:18:00.139065    4223 logs.go:123] Gathering logs for storage-provisioner [e098dcf633e8] ...
	I0307 10:18:00.139078    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098dcf633e8"
	I0307 10:18:00.151082    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:18:00.151095    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:18:00.173945    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:18:00.173955    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:18:00.188691    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:18:00.188702    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 10:18:00.223565    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:18:00.223658    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:18:00.224577    4223 logs.go:123] Gathering logs for kube-apiserver [bfad85a2aa85] ...
	I0307 10:18:00.224583    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfad85a2aa85"
	I0307 10:18:00.238946    4223 logs.go:123] Gathering logs for etcd [f5dd3c2f1586] ...
	I0307 10:18:00.238960    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5dd3c2f1586"
	I0307 10:18:00.252834    4223 logs.go:123] Gathering logs for kube-proxy [91458cddd2a8] ...
	I0307 10:18:00.252845    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91458cddd2a8"
	I0307 10:18:00.265164    4223 logs.go:123] Gathering logs for kube-controller-manager [a5eda657976b] ...
	I0307 10:18:00.265176    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5eda657976b"
	I0307 10:18:00.282236    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:18:00.282246    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 10:18:00.282270    4223 out.go:239] X Problems detected in kubelet:
	W0307 10:18:00.282275    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:18:00.282278    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:18:00.282283    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:18:00.282286    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:18:10.286131    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:18:15.287087    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:18:15.287255    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:18:15.298826    4223 logs.go:276] 1 containers: [bfad85a2aa85]
	I0307 10:18:15.298900    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:18:15.314385    4223 logs.go:276] 1 containers: [f5dd3c2f1586]
	I0307 10:18:15.314462    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:18:15.326676    4223 logs.go:276] 2 containers: [8f40abedda95 b646ef99863b]
	I0307 10:18:15.326755    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:18:15.337596    4223 logs.go:276] 1 containers: [89b036ed2ce0]
	I0307 10:18:15.337661    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:18:15.349009    4223 logs.go:276] 1 containers: [91458cddd2a8]
	I0307 10:18:15.349082    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:18:15.359905    4223 logs.go:276] 1 containers: [a5eda657976b]
	I0307 10:18:15.359974    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:18:15.369928    4223 logs.go:276] 0 containers: []
	W0307 10:18:15.369941    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:18:15.369999    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:18:15.380172    4223 logs.go:276] 1 containers: [e098dcf633e8]
	I0307 10:18:15.380190    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:18:15.380196    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:18:15.385034    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:18:15.385044    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:18:15.420164    4223 logs.go:123] Gathering logs for etcd [f5dd3c2f1586] ...
	I0307 10:18:15.420178    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5dd3c2f1586"
	I0307 10:18:15.434711    4223 logs.go:123] Gathering logs for coredns [8f40abedda95] ...
	I0307 10:18:15.434722    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f40abedda95"
	I0307 10:18:15.446693    4223 logs.go:123] Gathering logs for coredns [b646ef99863b] ...
	I0307 10:18:15.446705    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b646ef99863b"
	I0307 10:18:15.458130    4223 logs.go:123] Gathering logs for storage-provisioner [e098dcf633e8] ...
	I0307 10:18:15.458138    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098dcf633e8"
	I0307 10:18:15.478732    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:18:15.478744    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:18:15.490254    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:18:15.490265    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 10:18:15.524451    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:18:15.524548    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:18:15.525478    4223 logs.go:123] Gathering logs for kube-apiserver [bfad85a2aa85] ...
	I0307 10:18:15.525485    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfad85a2aa85"
	I0307 10:18:15.544408    4223 logs.go:123] Gathering logs for kube-scheduler [89b036ed2ce0] ...
	I0307 10:18:15.544419    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89b036ed2ce0"
	I0307 10:18:15.559796    4223 logs.go:123] Gathering logs for kube-proxy [91458cddd2a8] ...
	I0307 10:18:15.559806    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91458cddd2a8"
	I0307 10:18:15.572243    4223 logs.go:123] Gathering logs for kube-controller-manager [a5eda657976b] ...
	I0307 10:18:15.572255    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5eda657976b"
	I0307 10:18:15.593903    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:18:15.593913    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:18:15.618595    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:18:15.618607    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 10:18:15.618630    4223 out.go:239] X Problems detected in kubelet:
	W0307 10:18:15.618634    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:18:15.618638    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:18:15.618642    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:18:15.618645    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
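The recurring kubelet problem is a node-authorizer denial: the kubelet, acting as system:node:running-upgrade-064000, is refused a list of the coredns ConfigMap because the apiserver finds no pod on that node that references it ("no relationship found between node ... and this object"). One way to confirm the denial from inside the guest, reusing the kubectl binary and kubeconfig already shown in these logs; the impersonation flags are an assumption and require an admin kubeconfig to work:

    # Ask the apiserver whether the node identity may list configmaps in
    # kube-system, impersonating both the node user and the system:nodes group.
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl auth can-i list configmaps \
      -n kube-system \
      --as=system:node:running-upgrade-064000 --as-group=system:nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig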
	I0307 10:18:25.622500    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:18:30.624749    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:18:30.625189    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:18:30.664486    4223 logs.go:276] 1 containers: [bfad85a2aa85]
	I0307 10:18:30.664625    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:18:30.686476    4223 logs.go:276] 1 containers: [f5dd3c2f1586]
	I0307 10:18:30.686577    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:18:30.702444    4223 logs.go:276] 4 containers: [30ac341fe864 78686b00c83b 8f40abedda95 b646ef99863b]
	I0307 10:18:30.702525    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:18:30.714800    4223 logs.go:276] 1 containers: [89b036ed2ce0]
	I0307 10:18:30.714874    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:18:30.725834    4223 logs.go:276] 1 containers: [91458cddd2a8]
	I0307 10:18:30.725911    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:18:30.736324    4223 logs.go:276] 1 containers: [a5eda657976b]
	I0307 10:18:30.736409    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:18:30.746907    4223 logs.go:276] 0 containers: []
	W0307 10:18:30.746923    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:18:30.746978    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:18:30.757653    4223 logs.go:276] 1 containers: [e098dcf633e8]
	I0307 10:18:30.757669    4223 logs.go:123] Gathering logs for etcd [f5dd3c2f1586] ...
	I0307 10:18:30.757676    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5dd3c2f1586"
	I0307 10:18:30.771346    4223 logs.go:123] Gathering logs for coredns [8f40abedda95] ...
	I0307 10:18:30.771357    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f40abedda95"
	I0307 10:18:30.782865    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:18:30.782877    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 10:18:30.816704    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:18:30.816797    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:18:30.817696    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:18:30.817702    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:18:30.853340    4223 logs.go:123] Gathering logs for storage-provisioner [e098dcf633e8] ...
	I0307 10:18:30.853352    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098dcf633e8"
	I0307 10:18:30.865756    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:18:30.865768    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:18:30.877399    4223 logs.go:123] Gathering logs for kube-apiserver [bfad85a2aa85] ...
	I0307 10:18:30.877414    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfad85a2aa85"
	I0307 10:18:30.891788    4223 logs.go:123] Gathering logs for coredns [b646ef99863b] ...
	I0307 10:18:30.891801    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b646ef99863b"
	I0307 10:18:30.903559    4223 logs.go:123] Gathering logs for kube-scheduler [89b036ed2ce0] ...
	I0307 10:18:30.903573    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89b036ed2ce0"
	I0307 10:18:30.918072    4223 logs.go:123] Gathering logs for kube-proxy [91458cddd2a8] ...
	I0307 10:18:30.918082    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91458cddd2a8"
	I0307 10:18:30.930014    4223 logs.go:123] Gathering logs for kube-controller-manager [a5eda657976b] ...
	I0307 10:18:30.930028    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5eda657976b"
	I0307 10:18:30.948661    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:18:30.948670    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:18:30.971843    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:18:30.971851    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:18:30.976332    4223 logs.go:123] Gathering logs for coredns [30ac341fe864] ...
	I0307 10:18:30.976340    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30ac341fe864"
	I0307 10:18:30.987727    4223 logs.go:123] Gathering logs for coredns [78686b00c83b] ...
	I0307 10:18:30.987742    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78686b00c83b"
	I0307 10:18:30.999188    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:18:30.999204    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 10:18:30.999230    4223 out.go:239] X Problems detected in kubelet:
	W0307 10:18:30.999235    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:18:30.999239    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:18:30.999243    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:18:30.999245    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
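Note that the coredns listing grew from 2 containers to 4 in this iteration ([30ac341fe864 78686b00c83b 8f40abedda95 b646ef99863b]); since logs.go lists with "docker ps -a", the extra IDs are most likely exited containers left behind by restarts rather than four concurrently running replicas. A sketch of one way to check, using only the filter the log already uses (inspecting the Status column is an assumption, not something minikube does here):

    # List all coredns containers with their state; "Exited" entries would
    # indicate restarts rather than additional running replicas.
    docker ps -a --filter=name=k8s_coredns --format '{{.ID}} {{.Status}} {{.Names}}'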
	I0307 10:18:41.003051    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:18:46.005289    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:18:46.005490    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:18:46.027688    4223 logs.go:276] 1 containers: [bfad85a2aa85]
	I0307 10:18:46.027811    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:18:46.044365    4223 logs.go:276] 1 containers: [f5dd3c2f1586]
	I0307 10:18:46.044443    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:18:46.057048    4223 logs.go:276] 4 containers: [30ac341fe864 78686b00c83b 8f40abedda95 b646ef99863b]
	I0307 10:18:46.057123    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:18:46.079140    4223 logs.go:276] 1 containers: [89b036ed2ce0]
	I0307 10:18:46.079213    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:18:46.092176    4223 logs.go:276] 1 containers: [91458cddd2a8]
	I0307 10:18:46.092244    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:18:46.102843    4223 logs.go:276] 1 containers: [a5eda657976b]
	I0307 10:18:46.102913    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:18:46.113318    4223 logs.go:276] 0 containers: []
	W0307 10:18:46.113330    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:18:46.113387    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:18:46.123496    4223 logs.go:276] 1 containers: [e098dcf633e8]
	I0307 10:18:46.123513    4223 logs.go:123] Gathering logs for etcd [f5dd3c2f1586] ...
	I0307 10:18:46.123519    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5dd3c2f1586"
	I0307 10:18:46.137941    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:18:46.137955    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 10:18:46.171640    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:18:46.171732    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:18:46.172661    4223 logs.go:123] Gathering logs for coredns [b646ef99863b] ...
	I0307 10:18:46.172666    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b646ef99863b"
	I0307 10:18:46.184698    4223 logs.go:123] Gathering logs for kube-scheduler [89b036ed2ce0] ...
	I0307 10:18:46.184709    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89b036ed2ce0"
	I0307 10:18:46.199439    4223 logs.go:123] Gathering logs for kube-controller-manager [a5eda657976b] ...
	I0307 10:18:46.199451    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5eda657976b"
	I0307 10:18:46.223310    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:18:46.223321    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:18:46.234691    4223 logs.go:123] Gathering logs for kube-apiserver [bfad85a2aa85] ...
	I0307 10:18:46.234703    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfad85a2aa85"
	I0307 10:18:46.249064    4223 logs.go:123] Gathering logs for coredns [30ac341fe864] ...
	I0307 10:18:46.249074    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30ac341fe864"
	I0307 10:18:46.260559    4223 logs.go:123] Gathering logs for coredns [78686b00c83b] ...
	I0307 10:18:46.260574    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78686b00c83b"
	I0307 10:18:46.271766    4223 logs.go:123] Gathering logs for coredns [8f40abedda95] ...
	I0307 10:18:46.271779    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f40abedda95"
	I0307 10:18:46.283217    4223 logs.go:123] Gathering logs for storage-provisioner [e098dcf633e8] ...
	I0307 10:18:46.283228    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098dcf633e8"
	I0307 10:18:46.295200    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:18:46.295210    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:18:46.320441    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:18:46.320451    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:18:46.324700    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:18:46.324705    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:18:46.358718    4223 logs.go:123] Gathering logs for kube-proxy [91458cddd2a8] ...
	I0307 10:18:46.358732    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91458cddd2a8"
	I0307 10:18:46.370876    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:18:46.370886    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 10:18:46.370911    4223 out.go:239] X Problems detected in kubelet:
	W0307 10:18:46.370916    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:18:46.370921    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:18:46.370925    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:18:46.370928    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:18:56.373869    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:19:01.376144    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:19:01.376498    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:19:01.410712    4223 logs.go:276] 1 containers: [bfad85a2aa85]
	I0307 10:19:01.410845    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:19:01.431213    4223 logs.go:276] 1 containers: [f5dd3c2f1586]
	I0307 10:19:01.431324    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:19:01.446088    4223 logs.go:276] 4 containers: [30ac341fe864 78686b00c83b 8f40abedda95 b646ef99863b]
	I0307 10:19:01.446171    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:19:01.462299    4223 logs.go:276] 1 containers: [89b036ed2ce0]
	I0307 10:19:01.462367    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:19:01.474530    4223 logs.go:276] 1 containers: [91458cddd2a8]
	I0307 10:19:01.474598    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:19:01.485803    4223 logs.go:276] 1 containers: [a5eda657976b]
	I0307 10:19:01.485874    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:19:01.496253    4223 logs.go:276] 0 containers: []
	W0307 10:19:01.496263    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:19:01.496315    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:19:01.506956    4223 logs.go:276] 1 containers: [e098dcf633e8]
	I0307 10:19:01.506976    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:19:01.506981    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 10:19:01.541687    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:19:01.541786    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:19:01.542709    4223 logs.go:123] Gathering logs for etcd [f5dd3c2f1586] ...
	I0307 10:19:01.542715    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5dd3c2f1586"
	I0307 10:19:01.561426    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:19:01.561437    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:19:01.572928    4223 logs.go:123] Gathering logs for coredns [30ac341fe864] ...
	I0307 10:19:01.572941    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30ac341fe864"
	I0307 10:19:01.584969    4223 logs.go:123] Gathering logs for kube-scheduler [89b036ed2ce0] ...
	I0307 10:19:01.584980    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89b036ed2ce0"
	I0307 10:19:01.605009    4223 logs.go:123] Gathering logs for storage-provisioner [e098dcf633e8] ...
	I0307 10:19:01.605022    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098dcf633e8"
	I0307 10:19:01.616548    4223 logs.go:123] Gathering logs for coredns [78686b00c83b] ...
	I0307 10:19:01.616561    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78686b00c83b"
	I0307 10:19:01.628466    4223 logs.go:123] Gathering logs for coredns [8f40abedda95] ...
	I0307 10:19:01.628476    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f40abedda95"
	I0307 10:19:01.640526    4223 logs.go:123] Gathering logs for coredns [b646ef99863b] ...
	I0307 10:19:01.640535    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b646ef99863b"
	I0307 10:19:01.651748    4223 logs.go:123] Gathering logs for kube-proxy [91458cddd2a8] ...
	I0307 10:19:01.651762    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91458cddd2a8"
	I0307 10:19:01.664170    4223 logs.go:123] Gathering logs for kube-controller-manager [a5eda657976b] ...
	I0307 10:19:01.664183    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5eda657976b"
	I0307 10:19:01.681337    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:19:01.681347    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:19:01.686126    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:19:01.686133    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:19:01.723608    4223 logs.go:123] Gathering logs for kube-apiserver [bfad85a2aa85] ...
	I0307 10:19:01.723624    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfad85a2aa85"
	I0307 10:19:01.738328    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:19:01.738339    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:19:01.764147    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:19:01.764155    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 10:19:01.764179    4223 out.go:239] X Problems detected in kubelet:
	W0307 10:19:01.764183    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:19:01.764199    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:19:01.764205    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:19:01.764209    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:19:11.766945    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:19:16.769191    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:19:16.769362    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:19:16.794046    4223 logs.go:276] 1 containers: [bfad85a2aa85]
	I0307 10:19:16.794159    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:19:16.810002    4223 logs.go:276] 1 containers: [f5dd3c2f1586]
	I0307 10:19:16.810084    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:19:16.822568    4223 logs.go:276] 4 containers: [30ac341fe864 78686b00c83b 8f40abedda95 b646ef99863b]
	I0307 10:19:16.822646    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:19:16.833471    4223 logs.go:276] 1 containers: [89b036ed2ce0]
	I0307 10:19:16.833535    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:19:16.844175    4223 logs.go:276] 1 containers: [91458cddd2a8]
	I0307 10:19:16.844238    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:19:16.854970    4223 logs.go:276] 1 containers: [a5eda657976b]
	I0307 10:19:16.855028    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:19:16.864935    4223 logs.go:276] 0 containers: []
	W0307 10:19:16.864947    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:19:16.865005    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:19:16.874981    4223 logs.go:276] 1 containers: [e098dcf633e8]
	I0307 10:19:16.875000    4223 logs.go:123] Gathering logs for coredns [8f40abedda95] ...
	I0307 10:19:16.875006    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f40abedda95"
	I0307 10:19:16.887119    4223 logs.go:123] Gathering logs for kube-proxy [91458cddd2a8] ...
	I0307 10:19:16.887131    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91458cddd2a8"
	I0307 10:19:16.900410    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:19:16.900422    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 10:19:16.937856    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:19:16.937955    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:19:16.938918    4223 logs.go:123] Gathering logs for coredns [78686b00c83b] ...
	I0307 10:19:16.938930    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78686b00c83b"
	I0307 10:19:16.950681    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:19:16.950691    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:19:16.975448    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:19:16.975462    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:19:16.987176    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:19:16.987187    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:19:17.022534    4223 logs.go:123] Gathering logs for coredns [30ac341fe864] ...
	I0307 10:19:17.022548    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30ac341fe864"
	I0307 10:19:17.039156    4223 logs.go:123] Gathering logs for storage-provisioner [e098dcf633e8] ...
	I0307 10:19:17.039167    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098dcf633e8"
	I0307 10:19:17.050737    4223 logs.go:123] Gathering logs for coredns [b646ef99863b] ...
	I0307 10:19:17.050748    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b646ef99863b"
	I0307 10:19:17.062579    4223 logs.go:123] Gathering logs for kube-scheduler [89b036ed2ce0] ...
	I0307 10:19:17.062589    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89b036ed2ce0"
	I0307 10:19:17.079358    4223 logs.go:123] Gathering logs for kube-controller-manager [a5eda657976b] ...
	I0307 10:19:17.079368    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5eda657976b"
	I0307 10:19:17.098185    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:19:17.098193    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:19:17.102635    4223 logs.go:123] Gathering logs for kube-apiserver [bfad85a2aa85] ...
	I0307 10:19:17.102643    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfad85a2aa85"
	I0307 10:19:17.117363    4223 logs.go:123] Gathering logs for etcd [f5dd3c2f1586] ...
	I0307 10:19:17.117375    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5dd3c2f1586"
	I0307 10:19:17.130935    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:19:17.130944    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 10:19:17.130968    4223 out.go:239] X Problems detected in kubelet:
	W0307 10:19:17.130972    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:19:17.130976    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:19:17.130979    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:19:17.130983    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:19:27.134778    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:19:32.136914    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:19:32.137158    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:19:32.158817    4223 logs.go:276] 1 containers: [bfad85a2aa85]
	I0307 10:19:32.158914    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:19:32.174503    4223 logs.go:276] 1 containers: [f5dd3c2f1586]
	I0307 10:19:32.174579    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:19:32.185643    4223 logs.go:276] 4 containers: [30ac341fe864 78686b00c83b 8f40abedda95 b646ef99863b]
	I0307 10:19:32.185718    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:19:32.219834    4223 logs.go:276] 1 containers: [89b036ed2ce0]
	I0307 10:19:32.219910    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:19:32.233826    4223 logs.go:276] 1 containers: [91458cddd2a8]
	I0307 10:19:32.233898    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:19:32.244538    4223 logs.go:276] 1 containers: [a5eda657976b]
	I0307 10:19:32.244606    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:19:32.258843    4223 logs.go:276] 0 containers: []
	W0307 10:19:32.258858    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:19:32.258917    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:19:32.269264    4223 logs.go:276] 1 containers: [e098dcf633e8]
	I0307 10:19:32.269282    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:19:32.269288    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:19:32.273886    4223 logs.go:123] Gathering logs for kube-apiserver [bfad85a2aa85] ...
	I0307 10:19:32.273893    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfad85a2aa85"
	I0307 10:19:32.287903    4223 logs.go:123] Gathering logs for kube-scheduler [89b036ed2ce0] ...
	I0307 10:19:32.287913    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89b036ed2ce0"
	I0307 10:19:32.302923    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:19:32.302934    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:19:32.326147    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:19:32.326156    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 10:19:32.359345    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:19:32.359438    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:19:32.360338    4223 logs.go:123] Gathering logs for etcd [f5dd3c2f1586] ...
	I0307 10:19:32.360343    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5dd3c2f1586"
	I0307 10:19:32.374097    4223 logs.go:123] Gathering logs for coredns [78686b00c83b] ...
	I0307 10:19:32.374108    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78686b00c83b"
	I0307 10:19:32.386120    4223 logs.go:123] Gathering logs for coredns [8f40abedda95] ...
	I0307 10:19:32.386135    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f40abedda95"
	I0307 10:19:32.398300    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:19:32.398310    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:19:32.434768    4223 logs.go:123] Gathering logs for coredns [30ac341fe864] ...
	I0307 10:19:32.434779    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30ac341fe864"
	I0307 10:19:32.446747    4223 logs.go:123] Gathering logs for kube-controller-manager [a5eda657976b] ...
	I0307 10:19:32.446758    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5eda657976b"
	I0307 10:19:32.469263    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:19:32.469275    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:19:32.481066    4223 logs.go:123] Gathering logs for coredns [b646ef99863b] ...
	I0307 10:19:32.481079    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b646ef99863b"
	I0307 10:19:32.492968    4223 logs.go:123] Gathering logs for kube-proxy [91458cddd2a8] ...
	I0307 10:19:32.492981    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91458cddd2a8"
	I0307 10:19:32.505218    4223 logs.go:123] Gathering logs for storage-provisioner [e098dcf633e8] ...
	I0307 10:19:32.505229    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098dcf633e8"
	I0307 10:19:32.516548    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:19:32.516558    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 10:19:32.516584    4223 out.go:239] X Problems detected in kubelet:
	W0307 10:19:32.516592    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:19:32.516595    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:19:32.516599    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:19:32.516602    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:19:42.520406    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:19:47.522471    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:19:47.522621    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:19:47.537647    4223 logs.go:276] 1 containers: [bfad85a2aa85]
	I0307 10:19:47.537733    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:19:47.549809    4223 logs.go:276] 1 containers: [f5dd3c2f1586]
	I0307 10:19:47.549878    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:19:47.560133    4223 logs.go:276] 4 containers: [30ac341fe864 78686b00c83b 8f40abedda95 b646ef99863b]
	I0307 10:19:47.560212    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:19:47.570497    4223 logs.go:276] 1 containers: [89b036ed2ce0]
	I0307 10:19:47.570568    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:19:47.581358    4223 logs.go:276] 1 containers: [91458cddd2a8]
	I0307 10:19:47.581423    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:19:47.591867    4223 logs.go:276] 1 containers: [a5eda657976b]
	I0307 10:19:47.591942    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:19:47.611285    4223 logs.go:276] 0 containers: []
	W0307 10:19:47.611296    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:19:47.611354    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:19:47.622392    4223 logs.go:276] 1 containers: [e098dcf633e8]
	I0307 10:19:47.622409    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:19:47.622415    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:19:47.657219    4223 logs.go:123] Gathering logs for coredns [30ac341fe864] ...
	I0307 10:19:47.657230    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30ac341fe864"
	I0307 10:19:47.669600    4223 logs.go:123] Gathering logs for coredns [78686b00c83b] ...
	I0307 10:19:47.669611    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78686b00c83b"
	I0307 10:19:47.689172    4223 logs.go:123] Gathering logs for coredns [b646ef99863b] ...
	I0307 10:19:47.689183    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b646ef99863b"
	I0307 10:19:47.701367    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:19:47.701376    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:19:47.725309    4223 logs.go:123] Gathering logs for coredns [8f40abedda95] ...
	I0307 10:19:47.725317    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f40abedda95"
	I0307 10:19:47.741215    4223 logs.go:123] Gathering logs for kube-proxy [91458cddd2a8] ...
	I0307 10:19:47.741229    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91458cddd2a8"
	I0307 10:19:47.752675    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:19:47.752685    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:19:47.757064    4223 logs.go:123] Gathering logs for kube-apiserver [bfad85a2aa85] ...
	I0307 10:19:47.757075    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfad85a2aa85"
	I0307 10:19:47.771442    4223 logs.go:123] Gathering logs for etcd [f5dd3c2f1586] ...
	I0307 10:19:47.771455    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5dd3c2f1586"
	I0307 10:19:47.785575    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:19:47.785587    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:19:47.797290    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:19:47.797301    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 10:19:47.831262    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:19:47.831357    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:19:47.832259    4223 logs.go:123] Gathering logs for kube-scheduler [89b036ed2ce0] ...
	I0307 10:19:47.832266    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89b036ed2ce0"
	I0307 10:19:47.846898    4223 logs.go:123] Gathering logs for kube-controller-manager [a5eda657976b] ...
	I0307 10:19:47.846909    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5eda657976b"
	I0307 10:19:47.864560    4223 logs.go:123] Gathering logs for storage-provisioner [e098dcf633e8] ...
	I0307 10:19:47.864569    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098dcf633e8"
	I0307 10:19:47.876378    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:19:47.876388    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 10:19:47.876414    4223 out.go:239] X Problems detected in kubelet:
	W0307 10:19:47.876418    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:19:47.876422    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:19:47.876426    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:19:47.876429    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:19:57.880292    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:20:02.882425    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:20:02.882678    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:20:02.906726    4223 logs.go:276] 1 containers: [bfad85a2aa85]
	I0307 10:20:02.906843    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:20:02.923666    4223 logs.go:276] 1 containers: [f5dd3c2f1586]
	I0307 10:20:02.923737    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:20:02.936749    4223 logs.go:276] 4 containers: [30ac341fe864 78686b00c83b 8f40abedda95 b646ef99863b]
	I0307 10:20:02.936832    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:20:02.953028    4223 logs.go:276] 1 containers: [89b036ed2ce0]
	I0307 10:20:02.953098    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:20:02.963778    4223 logs.go:276] 1 containers: [91458cddd2a8]
	I0307 10:20:02.963849    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:20:02.974543    4223 logs.go:276] 1 containers: [a5eda657976b]
	I0307 10:20:02.974620    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:20:02.984389    4223 logs.go:276] 0 containers: []
	W0307 10:20:02.984405    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:20:02.984459    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:20:02.995567    4223 logs.go:276] 1 containers: [e098dcf633e8]
	I0307 10:20:02.995585    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:20:02.995590    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:20:03.007181    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:20:03.007194    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 10:20:03.040896    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:20:03.040989    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:20:03.041917    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:20:03.041925    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:20:03.046171    4223 logs.go:123] Gathering logs for etcd [f5dd3c2f1586] ...
	I0307 10:20:03.046177    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5dd3c2f1586"
	I0307 10:20:03.060423    4223 logs.go:123] Gathering logs for coredns [b646ef99863b] ...
	I0307 10:20:03.060434    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b646ef99863b"
	I0307 10:20:03.072838    4223 logs.go:123] Gathering logs for coredns [8f40abedda95] ...
	I0307 10:20:03.072852    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f40abedda95"
	I0307 10:20:03.084584    4223 logs.go:123] Gathering logs for kube-scheduler [89b036ed2ce0] ...
	I0307 10:20:03.084595    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89b036ed2ce0"
	I0307 10:20:03.103523    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:20:03.103533    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:20:03.138298    4223 logs.go:123] Gathering logs for kube-apiserver [bfad85a2aa85] ...
	I0307 10:20:03.138311    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfad85a2aa85"
	I0307 10:20:03.155889    4223 logs.go:123] Gathering logs for kube-proxy [91458cddd2a8] ...
	I0307 10:20:03.155903    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91458cddd2a8"
	I0307 10:20:03.181402    4223 logs.go:123] Gathering logs for kube-controller-manager [a5eda657976b] ...
	I0307 10:20:03.181415    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5eda657976b"
	I0307 10:20:03.199621    4223 logs.go:123] Gathering logs for storage-provisioner [e098dcf633e8] ...
	I0307 10:20:03.199632    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098dcf633e8"
	I0307 10:20:03.210365    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:20:03.210375    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:20:03.239986    4223 logs.go:123] Gathering logs for coredns [30ac341fe864] ...
	I0307 10:20:03.239997    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30ac341fe864"
	I0307 10:20:03.251003    4223 logs.go:123] Gathering logs for coredns [78686b00c83b] ...
	I0307 10:20:03.251013    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78686b00c83b"
	I0307 10:20:03.262188    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:20:03.262200    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 10:20:03.262225    4223 out.go:239] X Problems detected in kubelet:
	W0307 10:20:03.262231    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:20:03.262234    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:20:03.262240    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:20:03.262243    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:20:13.266097    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:20:18.268228    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:20:18.272573    4223 out.go:177] 
	W0307 10:20:18.276440    4223 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0307 10:20:18.276447    4223 out.go:239] * 
	W0307 10:20:18.276914    4223 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:20:18.286488    4223 out.go:177] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-064000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-03-07 10:20:18.355463 -0800 PST m=+3082.198235209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-064000 -n running-upgrade-064000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-064000 -n running-upgrade-064000: exit status 2 (15.5989825s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-064000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-434000          | force-systemd-flag-434000 | jenkins | v1.32.0 | 07 Mar 24 10:10 PST |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-411000              | force-systemd-env-411000  | jenkins | v1.32.0 | 07 Mar 24 10:10 PST |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-411000           | force-systemd-env-411000  | jenkins | v1.32.0 | 07 Mar 24 10:10 PST | 07 Mar 24 10:10 PST |
	| start   | -p docker-flags-256000                | docker-flags-256000       | jenkins | v1.32.0 | 07 Mar 24 10:10 PST |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-434000             | force-systemd-flag-434000 | jenkins | v1.32.0 | 07 Mar 24 10:10 PST |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-434000          | force-systemd-flag-434000 | jenkins | v1.32.0 | 07 Mar 24 10:10 PST | 07 Mar 24 10:10 PST |
	| start   | -p cert-expiration-259000             | cert-expiration-259000    | jenkins | v1.32.0 | 07 Mar 24 10:10 PST |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-256000 ssh               | docker-flags-256000       | jenkins | v1.32.0 | 07 Mar 24 10:10 PST |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-256000 ssh               | docker-flags-256000       | jenkins | v1.32.0 | 07 Mar 24 10:10 PST |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-256000                | docker-flags-256000       | jenkins | v1.32.0 | 07 Mar 24 10:10 PST | 07 Mar 24 10:10 PST |
	| start   | -p cert-options-521000                | cert-options-521000       | jenkins | v1.32.0 | 07 Mar 24 10:10 PST |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-521000 ssh               | cert-options-521000       | jenkins | v1.32.0 | 07 Mar 24 10:10 PST |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-521000 -- sudo        | cert-options-521000       | jenkins | v1.32.0 | 07 Mar 24 10:10 PST |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-521000                | cert-options-521000       | jenkins | v1.32.0 | 07 Mar 24 10:10 PST | 07 Mar 24 10:10 PST |
	| start   | -p running-upgrade-064000             | minikube                  | jenkins | v1.26.0 | 07 Mar 24 10:10 PST | 07 Mar 24 10:11 PST |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-064000             | running-upgrade-064000    | jenkins | v1.32.0 | 07 Mar 24 10:11 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-259000             | cert-expiration-259000    | jenkins | v1.32.0 | 07 Mar 24 10:13 PST |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-259000             | cert-expiration-259000    | jenkins | v1.32.0 | 07 Mar 24 10:13 PST | 07 Mar 24 10:13 PST |
	| start   | -p kubernetes-upgrade-726000          | kubernetes-upgrade-726000 | jenkins | v1.32.0 | 07 Mar 24 10:13 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-726000          | kubernetes-upgrade-726000 | jenkins | v1.32.0 | 07 Mar 24 10:13 PST | 07 Mar 24 10:13 PST |
	| start   | -p kubernetes-upgrade-726000          | kubernetes-upgrade-726000 | jenkins | v1.32.0 | 07 Mar 24 10:13 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-726000          | kubernetes-upgrade-726000 | jenkins | v1.32.0 | 07 Mar 24 10:13 PST | 07 Mar 24 10:13 PST |
	| start   | -p stopped-upgrade-853000             | minikube                  | jenkins | v1.26.0 | 07 Mar 24 10:13 PST | 07 Mar 24 10:14 PST |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-853000 stop           | minikube                  | jenkins | v1.26.0 | 07 Mar 24 10:14 PST | 07 Mar 24 10:14 PST |
	| start   | -p stopped-upgrade-853000             | stopped-upgrade-853000    | jenkins | v1.32.0 | 07 Mar 24 10:14 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 10:14:46
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 10:14:46.746354    4364 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:14:46.746523    4364 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:14:46.746527    4364 out.go:304] Setting ErrFile to fd 2...
	I0307 10:14:46.746530    4364 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:14:46.746682    4364 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:14:46.747830    4364 out.go:298] Setting JSON to false
	I0307 10:14:46.766743    4364 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4458,"bootTime":1709830828,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 10:14:46.766804    4364 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:14:46.770999    4364 out.go:177] * [stopped-upgrade-853000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 10:14:46.777036    4364 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 10:14:46.777081    4364 notify.go:220] Checking for updates...
	I0307 10:14:46.784897    4364 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:14:46.788028    4364 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 10:14:46.791034    4364 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:14:46.794005    4364 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 10:14:46.797025    4364 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:14:46.800309    4364 config.go:182] Loaded profile config "stopped-upgrade-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 10:14:46.803963    4364 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0307 10:14:46.807038    4364 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:14:46.810923    4364 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 10:14:46.817999    4364 start.go:297] selected driver: qemu2
	I0307 10:14:46.818006    4364 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-853000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50517 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-853000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0307 10:14:46.818053    4364 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:14:46.820685    4364 cni.go:84] Creating CNI manager for ""
	I0307 10:14:46.820708    4364 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 10:14:46.820736    4364 start.go:340] cluster config:
	{Name:stopped-upgrade-853000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50517 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-853000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0307 10:14:46.820791    4364 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:14:46.822665    4364 out.go:177] * Starting "stopped-upgrade-853000" primary control-plane node in "stopped-upgrade-853000" cluster
	I0307 10:14:46.826908    4364 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0307 10:14:46.826924    4364 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0307 10:14:46.826933    4364 cache.go:56] Caching tarball of preloaded images
	I0307 10:14:46.826989    4364 preload.go:173] Found /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 10:14:46.826994    4364 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0307 10:14:46.827051    4364 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/config.json ...
	I0307 10:14:46.827341    4364 start.go:360] acquireMachinesLock for stopped-upgrade-853000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:14:46.827374    4364 start.go:364] duration metric: took 26.459µs to acquireMachinesLock for "stopped-upgrade-853000"
	I0307 10:14:46.827383    4364 start.go:96] Skipping create...Using existing machine configuration
	I0307 10:14:46.827386    4364 fix.go:54] fixHost starting: 
	I0307 10:14:46.827502    4364 fix.go:112] recreateIfNeeded on stopped-upgrade-853000: state=Stopped err=<nil>
	W0307 10:14:46.827510    4364 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 10:14:46.835946    4364 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-853000" ...
	I0307 10:14:47.550497    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:14:47.550657    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:14:47.571762    4223 logs.go:276] 2 containers: [912e171f7628 3b5187de19fa]
	I0307 10:14:47.571863    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:14:47.586862    4223 logs.go:276] 2 containers: [e7399b2ae704 9378fea4f127]
	I0307 10:14:47.586944    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:14:47.598129    4223 logs.go:276] 1 containers: [acb346c0d2d8]
	I0307 10:14:47.598191    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:14:47.609121    4223 logs.go:276] 2 containers: [e50116dd9958 60e5f0621b8c]
	I0307 10:14:47.609192    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:14:47.619508    4223 logs.go:276] 1 containers: [438e893703a5]
	I0307 10:14:47.619576    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:14:47.630193    4223 logs.go:276] 2 containers: [7b2b4d88e84b 78c1d8b7fef3]
	I0307 10:14:47.630261    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:14:47.639997    4223 logs.go:276] 0 containers: []
	W0307 10:14:47.640009    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:14:47.640067    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:14:47.650482    4223 logs.go:276] 2 containers: [e8e2f52b53d1 e7d817fc8222]
	I0307 10:14:47.650502    4223 logs.go:123] Gathering logs for kube-apiserver [912e171f7628] ...
	I0307 10:14:47.650507    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912e171f7628"
	I0307 10:14:46.840054    4364 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/stopped-upgrade-853000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/stopped-upgrade-853000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/stopped-upgrade-853000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50483-:22,hostfwd=tcp::50484-:2376,hostname=stopped-upgrade-853000 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/stopped-upgrade-853000/disk.qcow2
	I0307 10:14:46.882479    4364 main.go:141] libmachine: STDOUT: 
	I0307 10:14:46.882518    4364 main.go:141] libmachine: STDERR: 
	I0307 10:14:46.882524    4364 main.go:141] libmachine: Waiting for VM to start (ssh -p 50483 docker@127.0.0.1)...
	I0307 10:14:47.664002    4223 logs.go:123] Gathering logs for etcd [e7399b2ae704] ...
	I0307 10:14:47.664012    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7399b2ae704"
	I0307 10:14:47.677904    4223 logs.go:123] Gathering logs for storage-provisioner [e8e2f52b53d1] ...
	I0307 10:14:47.677913    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8e2f52b53d1"
	I0307 10:14:47.695252    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:14:47.695264    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:14:47.719041    4223 logs.go:123] Gathering logs for kube-apiserver [3b5187de19fa] ...
	I0307 10:14:47.719048    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b5187de19fa"
	I0307 10:14:47.738371    4223 logs.go:123] Gathering logs for etcd [9378fea4f127] ...
	I0307 10:14:47.738382    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9378fea4f127"
	I0307 10:14:47.752479    4223 logs.go:123] Gathering logs for kube-scheduler [60e5f0621b8c] ...
	I0307 10:14:47.752489    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e5f0621b8c"
	I0307 10:14:47.767654    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:14:47.767668    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:14:47.771964    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:14:47.771973    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:14:47.806592    4223 logs.go:123] Gathering logs for kube-scheduler [e50116dd9958] ...
	I0307 10:14:47.806605    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e50116dd9958"
	I0307 10:14:47.818228    4223 logs.go:123] Gathering logs for storage-provisioner [e7d817fc8222] ...
	I0307 10:14:47.818241    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7d817fc8222"
	I0307 10:14:47.829443    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:14:47.829456    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:14:47.841053    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:14:47.841068    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:14:47.878926    4223 logs.go:123] Gathering logs for coredns [acb346c0d2d8] ...
	I0307 10:14:47.878935    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb346c0d2d8"
	I0307 10:14:47.889910    4223 logs.go:123] Gathering logs for kube-proxy [438e893703a5] ...
	I0307 10:14:47.889921    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 438e893703a5"
	I0307 10:14:47.901005    4223 logs.go:123] Gathering logs for kube-controller-manager [7b2b4d88e84b] ...
	I0307 10:14:47.901018    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2b4d88e84b"
	I0307 10:14:47.921702    4223 logs.go:123] Gathering logs for kube-controller-manager [78c1d8b7fef3] ...
	I0307 10:14:47.921712    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c1d8b7fef3"
	I0307 10:14:50.439130    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:14:55.441643    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:14:55.441748    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:14:55.452679    4223 logs.go:276] 2 containers: [912e171f7628 3b5187de19fa]
	I0307 10:14:55.452761    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:14:55.463475    4223 logs.go:276] 2 containers: [e7399b2ae704 9378fea4f127]
	I0307 10:14:55.463555    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:14:55.478713    4223 logs.go:276] 1 containers: [acb346c0d2d8]
	I0307 10:14:55.478778    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:14:55.489743    4223 logs.go:276] 2 containers: [e50116dd9958 60e5f0621b8c]
	I0307 10:14:55.489804    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:14:55.500247    4223 logs.go:276] 1 containers: [438e893703a5]
	I0307 10:14:55.500312    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:14:55.510824    4223 logs.go:276] 2 containers: [7b2b4d88e84b 78c1d8b7fef3]
	I0307 10:14:55.510891    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:14:55.525096    4223 logs.go:276] 0 containers: []
	W0307 10:14:55.525108    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:14:55.525163    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:14:55.535849    4223 logs.go:276] 2 containers: [e8e2f52b53d1 e7d817fc8222]
	I0307 10:14:55.535866    4223 logs.go:123] Gathering logs for kube-scheduler [e50116dd9958] ...
	I0307 10:14:55.535872    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e50116dd9958"
	I0307 10:14:55.547581    4223 logs.go:123] Gathering logs for kube-scheduler [60e5f0621b8c] ...
	I0307 10:14:55.547593    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e5f0621b8c"
	I0307 10:14:55.562830    4223 logs.go:123] Gathering logs for kube-proxy [438e893703a5] ...
	I0307 10:14:55.562841    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 438e893703a5"
	I0307 10:14:55.576190    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:14:55.576202    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:14:55.580723    4223 logs.go:123] Gathering logs for etcd [9378fea4f127] ...
	I0307 10:14:55.580730    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9378fea4f127"
	I0307 10:14:55.594982    4223 logs.go:123] Gathering logs for coredns [acb346c0d2d8] ...
	I0307 10:14:55.594994    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb346c0d2d8"
	I0307 10:14:55.606485    4223 logs.go:123] Gathering logs for kube-controller-manager [7b2b4d88e84b] ...
	I0307 10:14:55.606496    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2b4d88e84b"
	I0307 10:14:55.627090    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:14:55.627100    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:14:55.665610    4223 logs.go:123] Gathering logs for kube-apiserver [3b5187de19fa] ...
	I0307 10:14:55.665617    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b5187de19fa"
	I0307 10:14:55.685139    4223 logs.go:123] Gathering logs for storage-provisioner [e8e2f52b53d1] ...
	I0307 10:14:55.685149    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8e2f52b53d1"
	I0307 10:14:55.696948    4223 logs.go:123] Gathering logs for storage-provisioner [e7d817fc8222] ...
	I0307 10:14:55.696960    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7d817fc8222"
	I0307 10:14:55.708385    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:14:55.708396    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:14:55.731931    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:14:55.731939    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:14:55.743710    4223 logs.go:123] Gathering logs for kube-apiserver [912e171f7628] ...
	I0307 10:14:55.743723    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912e171f7628"
	I0307 10:14:55.757958    4223 logs.go:123] Gathering logs for etcd [e7399b2ae704] ...
	I0307 10:14:55.757969    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7399b2ae704"
	I0307 10:14:55.775695    4223 logs.go:123] Gathering logs for kube-controller-manager [78c1d8b7fef3] ...
	I0307 10:14:55.775707    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c1d8b7fef3"
	I0307 10:14:55.788917    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:14:55.788927    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
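The run above is minikube's diagnostics sweep: each control-plane component is located by a k8s_<name> container-name filter and its last 400 log lines are tailed. A minimal hand-run equivalent inside the guest (names and tail depth taken from the log; shell access and the same Docker runtime assumed):

    # replay the diagnostics sweep by hand
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager storage-provisioner; do
      for id in $(docker ps -a --filter="name=k8s_${name}" --format='{{.ID}}'); do
        echo "=== ${name} ${id} ==="
        docker logs --tail 400 "${id}"
      done
    done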
	I0307 10:14:58.327314    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:15:03.327677    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
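Each probe here is an HTTPS GET against the apiserver's healthz endpoint with a short client timeout; on timeout minikube re-runs the diagnostics sweep and retries. A hand-run sketch of the same poll (endpoint from the log; -k skips certificate verification, which minikube instead performs against its own CA):

    # poll apiserver health the way the checker above does
    until curl -sk --max-time 5 https://10.0.2.15:8443/healthz; do
      echo "apiserver not ready; retrying"
      sleep 2
    done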
	I0307 10:15:03.328065    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:15:03.368845    4223 logs.go:276] 2 containers: [912e171f7628 3b5187de19fa]
	I0307 10:15:03.368977    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:15:03.400622    4223 logs.go:276] 2 containers: [e7399b2ae704 9378fea4f127]
	I0307 10:15:03.400707    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:15:03.416248    4223 logs.go:276] 1 containers: [acb346c0d2d8]
	I0307 10:15:03.416333    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:15:03.435993    4223 logs.go:276] 2 containers: [e50116dd9958 60e5f0621b8c]
	I0307 10:15:03.436072    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:15:03.449121    4223 logs.go:276] 1 containers: [438e893703a5]
	I0307 10:15:03.449203    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:15:03.459637    4223 logs.go:276] 2 containers: [7b2b4d88e84b 78c1d8b7fef3]
	I0307 10:15:03.459732    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:15:03.471032    4223 logs.go:276] 0 containers: []
	W0307 10:15:03.471047    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:15:03.471115    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:15:03.482778    4223 logs.go:276] 2 containers: [e8e2f52b53d1 e7d817fc8222]
	I0307 10:15:03.482797    4223 logs.go:123] Gathering logs for etcd [e7399b2ae704] ...
	I0307 10:15:03.482803    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7399b2ae704"
	I0307 10:15:03.500442    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:15:03.500453    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:15:03.536823    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:15:03.536834    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:15:03.541615    4223 logs.go:123] Gathering logs for kube-controller-manager [78c1d8b7fef3] ...
	I0307 10:15:03.541622    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c1d8b7fef3"
	I0307 10:15:03.553562    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:15:03.553576    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:15:03.595974    4223 logs.go:123] Gathering logs for etcd [9378fea4f127] ...
	I0307 10:15:03.595987    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9378fea4f127"
	I0307 10:15:03.610148    4223 logs.go:123] Gathering logs for storage-provisioner [e8e2f52b53d1] ...
	I0307 10:15:03.610161    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8e2f52b53d1"
	I0307 10:15:03.622094    4223 logs.go:123] Gathering logs for storage-provisioner [e7d817fc8222] ...
	I0307 10:15:03.622105    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7d817fc8222"
	I0307 10:15:03.633230    4223 logs.go:123] Gathering logs for kube-scheduler [60e5f0621b8c] ...
	I0307 10:15:03.633241    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e5f0621b8c"
	I0307 10:15:03.648337    4223 logs.go:123] Gathering logs for kube-controller-manager [7b2b4d88e84b] ...
	I0307 10:15:03.648349    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2b4d88e84b"
	I0307 10:15:03.665690    4223 logs.go:123] Gathering logs for coredns [acb346c0d2d8] ...
	I0307 10:15:03.665701    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb346c0d2d8"
	I0307 10:15:03.678835    4223 logs.go:123] Gathering logs for kube-scheduler [e50116dd9958] ...
	I0307 10:15:03.678848    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e50116dd9958"
	I0307 10:15:03.690297    4223 logs.go:123] Gathering logs for kube-proxy [438e893703a5] ...
	I0307 10:15:03.690310    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 438e893703a5"
	I0307 10:15:03.702520    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:15:03.702530    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:15:03.726327    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:15:03.726336    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:15:03.737992    4223 logs.go:123] Gathering logs for kube-apiserver [912e171f7628] ...
	I0307 10:15:03.738003    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912e171f7628"
	I0307 10:15:03.752043    4223 logs.go:123] Gathering logs for kube-apiserver [3b5187de19fa] ...
	I0307 10:15:03.752054    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b5187de19fa"
	I0307 10:15:06.271410    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:15:06.790483    4364 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/config.json ...
	I0307 10:15:06.791339    4364 machine.go:94] provisionDockerMachine start ...
	I0307 10:15:06.791528    4364 main.go:141] libmachine: Using SSH client type: native
	I0307 10:15:06.792083    4364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1003eda30] 0x1003f0290 <nil>  [] 0s} localhost 50483 <nil> <nil>}
	I0307 10:15:06.792100    4364 main.go:141] libmachine: About to run SSH command:
	hostname
	I0307 10:15:06.874483    4364 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0307 10:15:06.874510    4364 buildroot.go:166] provisioning hostname "stopped-upgrade-853000"
	I0307 10:15:06.874631    4364 main.go:141] libmachine: Using SSH client type: native
	I0307 10:15:06.874818    4364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1003eda30] 0x1003f0290 <nil>  [] 0s} localhost 50483 <nil> <nil>}
	I0307 10:15:06.874829    4364 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-853000 && echo "stopped-upgrade-853000" | sudo tee /etc/hostname
	I0307 10:15:06.948501    4364 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-853000
	
	I0307 10:15:06.948579    4364 main.go:141] libmachine: Using SSH client type: native
	I0307 10:15:06.948733    4364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1003eda30] 0x1003f0290 <nil>  [] 0s} localhost 50483 <nil> <nil>}
	I0307 10:15:06.948744    4364 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-853000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-853000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-853000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 10:15:07.012571    4364 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 10:15:07.012583    4364 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18241-1349/.minikube CaCertPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18241-1349/.minikube}
	I0307 10:15:07.012597    4364 buildroot.go:174] setting up certificates
	I0307 10:15:07.012602    4364 provision.go:84] configureAuth start
	I0307 10:15:07.012606    4364 provision.go:143] copyHostCerts
	I0307 10:15:07.012692    4364 exec_runner.go:144] found /Users/jenkins/minikube-integration/18241-1349/.minikube/ca.pem, removing ...
	I0307 10:15:07.012703    4364 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18241-1349/.minikube/ca.pem
	I0307 10:15:07.012816    4364 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18241-1349/.minikube/ca.pem (1078 bytes)
	I0307 10:15:07.013019    4364 exec_runner.go:144] found /Users/jenkins/minikube-integration/18241-1349/.minikube/cert.pem, removing ...
	I0307 10:15:07.013023    4364 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18241-1349/.minikube/cert.pem
	I0307 10:15:07.013071    4364 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18241-1349/.minikube/cert.pem (1123 bytes)
	I0307 10:15:07.013183    4364 exec_runner.go:144] found /Users/jenkins/minikube-integration/18241-1349/.minikube/key.pem, removing ...
	I0307 10:15:07.013187    4364 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18241-1349/.minikube/key.pem
	I0307 10:15:07.013231    4364 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18241-1349/.minikube/key.pem (1679 bytes)
	I0307 10:15:07.013314    4364 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-853000 san=[127.0.0.1 localhost minikube stopped-upgrade-853000]
	I0307 10:15:07.056850    4364 provision.go:177] copyRemoteCerts
	I0307 10:15:07.056881    4364 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 10:15:07.056888    4364 sshutil.go:53] new ssh client: &{IP:localhost Port:50483 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/stopped-upgrade-853000/id_rsa Username:docker}
	I0307 10:15:07.088333    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0307 10:15:07.095062    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0307 10:15:07.101553    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0307 10:15:07.108657    4364 provision.go:87] duration metric: took 96.053084ms to configureAuth
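configureAuth regenerates the CA-signed Docker server certificate (SANs 127.0.0.1, localhost, minikube, stopped-upgrade-853000) and pushes it to /etc/docker on the guest. Assuming a reasonably recent openssl inside the guest, the installed material can be checked directly:

    # inspect the server cert that configureAuth just provisioned
    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -dates
    sudo openssl x509 -in /etc/docker/server.pem -noout -ext subjectAltName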
	I0307 10:15:07.108665    4364 buildroot.go:189] setting minikube options for container-runtime
	I0307 10:15:07.108785    4364 config.go:182] Loaded profile config "stopped-upgrade-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 10:15:07.108818    4364 main.go:141] libmachine: Using SSH client type: native
	I0307 10:15:07.108905    4364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1003eda30] 0x1003f0290 <nil>  [] 0s} localhost 50483 <nil> <nil>}
	I0307 10:15:07.108910    4364 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0307 10:15:07.167092    4364 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0307 10:15:07.167106    4364 buildroot.go:70] root file system type: tmpfs
	I0307 10:15:07.167158    4364 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0307 10:15:07.167210    4364 main.go:141] libmachine: Using SSH client type: native
	I0307 10:15:07.167323    4364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1003eda30] 0x1003f0290 <nil>  [] 0s} localhost 50483 <nil> <nil>}
	I0307 10:15:07.167357    4364 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0307 10:15:07.230672    4364 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0307 10:15:07.230731    4364 main.go:141] libmachine: Using SSH client type: native
	I0307 10:15:07.230842    4364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1003eda30] 0x1003f0290 <nil>  [] 0s} localhost 50483 <nil> <nil>}
	I0307 10:15:07.230852    4364 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0307 10:15:07.559705    4364 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0307 10:15:07.559719    4364 machine.go:97] duration metric: took 768.39375ms to provisionDockerMachine
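The generated unit is installed conditionally: the diff || { mv; daemon-reload; enable; restart; } one-liner above replaces the on-disk unit only when it differs, and here diff failed because no unit existed yet, so the new file was moved into place and docker enabled. The result can be read back the same way minikube does further down:

    # confirm the unit the diff-or-replace step installed
    sudo systemctl cat docker.service
    sudo systemctl is-active docker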
	I0307 10:15:07.559726    4364 start.go:293] postStartSetup for "stopped-upgrade-853000" (driver="qemu2")
	I0307 10:15:07.559732    4364 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 10:15:07.559797    4364 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 10:15:07.559805    4364 sshutil.go:53] new ssh client: &{IP:localhost Port:50483 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/stopped-upgrade-853000/id_rsa Username:docker}
	I0307 10:15:07.591064    4364 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 10:15:07.592402    4364 info.go:137] Remote host: Buildroot 2021.02.12
	I0307 10:15:07.592409    4364 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18241-1349/.minikube/addons for local assets ...
	I0307 10:15:07.592492    4364 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18241-1349/.minikube/files for local assets ...
	I0307 10:15:07.592610    4364 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18241-1349/.minikube/files/etc/ssl/certs/17812.pem -> 17812.pem in /etc/ssl/certs
	I0307 10:15:07.592736    4364 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0307 10:15:07.595247    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/files/etc/ssl/certs/17812.pem --> /etc/ssl/certs/17812.pem (1708 bytes)
	I0307 10:15:07.601798    4364 start.go:296] duration metric: took 42.06775ms for postStartSetup
	I0307 10:15:07.601812    4364 fix.go:56] duration metric: took 20.775110917s for fixHost
	I0307 10:15:07.601845    4364 main.go:141] libmachine: Using SSH client type: native
	I0307 10:15:07.601986    4364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1003eda30] 0x1003f0290 <nil>  [] 0s} localhost 50483 <nil> <nil>}
	I0307 10:15:07.601991    4364 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0307 10:15:07.657847    4364 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709835307.753017337
	
	I0307 10:15:07.657855    4364 fix.go:216] guest clock: 1709835307.753017337
	I0307 10:15:07.657860    4364 fix.go:229] Guest: 2024-03-07 10:15:07.753017337 -0800 PST Remote: 2024-03-07 10:15:07.601813 -0800 PST m=+20.889528876 (delta=151.204337ms)
	I0307 10:15:07.657870    4364 fix.go:200] guest clock delta is within tolerance: 151.204337ms
	I0307 10:15:07.657875    4364 start.go:83] releasing machines lock for "stopped-upgrade-853000", held for 20.83118425s
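The clock check reads date +%s.%N inside the guest and compares it against host time, accepting the ~151ms delta as within tolerance. A rough host-side version of that comparison (SSH port, key, and user taken from the log; python3 used for a portable host timestamp):

    # compare guest and host clocks as the fix step above does
    host_now=$(python3 -c 'import time; print(time.time())')
    guest_now=$(ssh -p 50483 -i /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/stopped-upgrade-853000/id_rsa docker@localhost 'date +%s.%N')
    echo "delta: $(echo "$guest_now - $host_now" | bc)s"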
	I0307 10:15:07.657936    4364 ssh_runner.go:195] Run: cat /version.json
	I0307 10:15:07.657945    4364 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 10:15:07.657944    4364 sshutil.go:53] new ssh client: &{IP:localhost Port:50483 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/stopped-upgrade-853000/id_rsa Username:docker}
	I0307 10:15:07.657962    4364 sshutil.go:53] new ssh client: &{IP:localhost Port:50483 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/stopped-upgrade-853000/id_rsa Username:docker}
	W0307 10:15:07.658549    4364 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50483: connect: connection refused
	I0307 10:15:07.658574    4364 retry.go:31] will retry after 304.222176ms: dial tcp [::1]:50483: connect: connection refused
	W0307 10:15:08.004650    4364 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0307 10:15:08.004777    4364 ssh_runner.go:195] Run: systemctl --version
	I0307 10:15:08.008033    4364 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0307 10:15:08.010441    4364 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0307 10:15:08.010487    4364 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0307 10:15:08.014711    4364 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0307 10:15:08.021487    4364 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0307 10:15:08.021498    4364 start.go:494] detecting cgroup driver to use...
	I0307 10:15:08.021598    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 10:15:08.030567    4364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0307 10:15:08.033880    4364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 10:15:08.039583    4364 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 10:15:08.039641    4364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 10:15:08.044164    4364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 10:15:08.051423    4364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 10:15:08.054547    4364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 10:15:08.057602    4364 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 10:15:08.060753    4364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 10:15:08.063775    4364 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 10:15:08.066146    4364 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 10:15:08.069096    4364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:15:08.130906    4364 ssh_runner.go:195] Run: sudo systemctl restart containerd
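The sed edits above force containerd onto the cgroupfs driver (SystemdCgroup = false) before the restart; docker is pointed at the same driver a few steps later via /etc/docker/daemon.json. The effective driver can be read back directly, as minikube itself does below:

    # read back the effective cgroup driver after the reconfiguration
    docker info --format '{{.CgroupDriver}}'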
	I0307 10:15:08.139456    4364 start.go:494] detecting cgroup driver to use...
	I0307 10:15:08.139527    4364 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0307 10:15:08.145635    4364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 10:15:08.157235    4364 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0307 10:15:08.164517    4364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 10:15:08.169015    4364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 10:15:08.173590    4364 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0307 10:15:08.237141    4364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 10:15:08.243347    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 10:15:08.249341    4364 ssh_runner.go:195] Run: which cri-dockerd
	I0307 10:15:08.250673    4364 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0307 10:15:08.253714    4364 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0307 10:15:08.258585    4364 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0307 10:15:08.318628    4364 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0307 10:15:08.394475    4364 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0307 10:15:08.394540    4364 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0307 10:15:08.399450    4364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:15:08.459373    4364 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 10:15:09.601996    4364 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.142643125s)
	I0307 10:15:09.602053    4364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0307 10:15:09.606778    4364 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0307 10:15:09.612854    4364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 10:15:09.617446    4364 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0307 10:15:09.678724    4364 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 10:15:09.739862    4364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:15:09.803628    4364 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0307 10:15:09.809117    4364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 10:15:09.813954    4364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:15:09.877283    4364 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0307 10:15:09.916306    4364 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0307 10:15:09.916380    4364 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0307 10:15:09.918362    4364 start.go:562] Will wait 60s for crictl version
	I0307 10:15:09.918402    4364 ssh_runner.go:195] Run: which crictl
	I0307 10:15:09.919863    4364 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 10:15:09.935827    4364 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0307 10:15:09.935907    4364 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 10:15:09.952987    4364 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 10:15:09.972454    4364 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0307 10:15:09.972528    4364 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0307 10:15:09.973735    4364 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 10:15:09.977614    4364 kubeadm.go:877] updating cluster {Name:stopped-upgrade-853000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50517 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-853000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0307 10:15:09.977681    4364 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0307 10:15:09.977721    4364 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 10:15:09.988739    4364 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0307 10:15:09.988747    4364 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
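The mismatch here is the k8s.gcr.io → registry.k8s.io registry rename: the v1.24.1 preload tarball ships images under the old k8s.gcr.io names, while this minikube checks for registry.k8s.io names, so the preload is judged incomplete and the slower per-image cache path below kicks in. Retagging would expose the already-loaded images under the expected names, e.g.:

    # retag the preloaded k8s.gcr.io images under the registry.k8s.io names being checked for
    for img in kube-apiserver:v1.24.1 kube-proxy:v1.24.1 kube-controller-manager:v1.24.1 \
               kube-scheduler:v1.24.1 etcd:3.5.3-0 pause:3.7 coredns/coredns:v1.8.6; do
      docker tag "k8s.gcr.io/${img}" "registry.k8s.io/${img}"
    done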
	I0307 10:15:09.988794    4364 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0307 10:15:09.992154    4364 ssh_runner.go:195] Run: which lz4
	I0307 10:15:09.993457    4364 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0307 10:15:09.994714    4364 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0307 10:15:09.994723    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0307 10:15:10.692895    4364 docker.go:649] duration metric: took 699.494416ms to copy over tarball
	I0307 10:15:10.692960    4364 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0307 10:15:11.272173    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:15:11.272278    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:15:11.284749    4223 logs.go:276] 2 containers: [912e171f7628 3b5187de19fa]
	I0307 10:15:11.284828    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:15:11.296190    4223 logs.go:276] 2 containers: [e7399b2ae704 9378fea4f127]
	I0307 10:15:11.296263    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:15:11.308463    4223 logs.go:276] 1 containers: [acb346c0d2d8]
	I0307 10:15:11.308538    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:15:11.328344    4223 logs.go:276] 2 containers: [e50116dd9958 60e5f0621b8c]
	I0307 10:15:11.328431    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:15:11.340696    4223 logs.go:276] 1 containers: [438e893703a5]
	I0307 10:15:11.340777    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:15:11.352556    4223 logs.go:276] 2 containers: [7b2b4d88e84b 78c1d8b7fef3]
	I0307 10:15:11.352628    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:15:11.363924    4223 logs.go:276] 0 containers: []
	W0307 10:15:11.363935    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:15:11.363990    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:15:11.374811    4223 logs.go:276] 2 containers: [e8e2f52b53d1 e7d817fc8222]
	I0307 10:15:11.374830    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:15:11.374836    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:15:11.417988    4223 logs.go:123] Gathering logs for kube-controller-manager [78c1d8b7fef3] ...
	I0307 10:15:11.418006    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c1d8b7fef3"
	I0307 10:15:11.432024    4223 logs.go:123] Gathering logs for storage-provisioner [e7d817fc8222] ...
	I0307 10:15:11.432037    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7d817fc8222"
	I0307 10:15:11.449009    4223 logs.go:123] Gathering logs for coredns [acb346c0d2d8] ...
	I0307 10:15:11.449022    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb346c0d2d8"
	I0307 10:15:11.461744    4223 logs.go:123] Gathering logs for kube-scheduler [e50116dd9958] ...
	I0307 10:15:11.461757    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e50116dd9958"
	I0307 10:15:11.476056    4223 logs.go:123] Gathering logs for kube-scheduler [60e5f0621b8c] ...
	I0307 10:15:11.476076    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e5f0621b8c"
	I0307 10:15:11.496281    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:15:11.496311    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:15:11.538892    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:15:11.538915    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:15:11.543680    4223 logs.go:123] Gathering logs for kube-apiserver [3b5187de19fa] ...
	I0307 10:15:11.543693    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b5187de19fa"
	I0307 10:15:11.563949    4223 logs.go:123] Gathering logs for kube-proxy [438e893703a5] ...
	I0307 10:15:11.563969    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 438e893703a5"
	I0307 10:15:11.579129    4223 logs.go:123] Gathering logs for storage-provisioner [e8e2f52b53d1] ...
	I0307 10:15:11.579145    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8e2f52b53d1"
	I0307 10:15:11.597776    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:15:11.597788    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:15:11.630304    4223 logs.go:123] Gathering logs for kube-controller-manager [7b2b4d88e84b] ...
	I0307 10:15:11.630327    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2b4d88e84b"
	I0307 10:15:11.661612    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:15:11.661629    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:15:11.681292    4223 logs.go:123] Gathering logs for kube-apiserver [912e171f7628] ...
	I0307 10:15:11.681308    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912e171f7628"
	I0307 10:15:11.720893    4223 logs.go:123] Gathering logs for etcd [e7399b2ae704] ...
	I0307 10:15:11.720912    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7399b2ae704"
	I0307 10:15:11.743331    4223 logs.go:123] Gathering logs for etcd [9378fea4f127] ...
	I0307 10:15:11.743346    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9378fea4f127"
	I0307 10:15:11.989280    4364 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.296346417s)
	I0307 10:15:11.989295    4364 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0307 10:15:12.007849    4364 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0307 10:15:12.010719    4364 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0307 10:15:12.016041    4364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:15:12.085388    4364 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 10:15:13.674852    4364 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.589501042s)
	I0307 10:15:13.674951    4364 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 10:15:13.689572    4364 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0307 10:15:13.689580    4364 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0307 10:15:13.689586    4364 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0307 10:15:13.695935    4364 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0307 10:15:13.695948    4364 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 10:15:13.696042    4364 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0307 10:15:13.696083    4364 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0307 10:15:13.696135    4364 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0307 10:15:13.696162    4364 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 10:15:13.696213    4364 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0307 10:15:13.696265    4364 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0307 10:15:13.705980    4364 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0307 10:15:13.706156    4364 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0307 10:15:13.706711    4364 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 10:15:13.706855    4364 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0307 10:15:13.706854    4364 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 10:15:13.706888    4364 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0307 10:15:13.706913    4364 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0307 10:15:13.706921    4364 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0307 10:15:15.624169    4364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0307 10:15:15.639268    4364 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0307 10:15:15.639303    4364 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0307 10:15:15.639361    4364 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0307 10:15:15.650942    4364 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0307 10:15:15.651048    4364 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0307 10:15:15.653529    4364 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0307 10:15:15.653542    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0307 10:15:15.661490    4364 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0307 10:15:15.661499    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0307 10:15:15.687877    4364 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
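pause:3.7 shows the complete per-image cycle LoadCachedImages applies to each entry: inspect by hash, remove the mismatched copy, scp the cached tarball into /var/lib/minikube/images, and pipe it through docker load. The guest-side half, condensed:

    # guest-side half of the cache-load cycle, shown here for pause:3.7
    docker image inspect --format '{{.Id}}' registry.k8s.io/pause:3.7 \
      && docker rmi registry.k8s.io/pause:3.7            # drop the mismatched copy
    sudo cat /var/lib/minikube/images/pause_3.7 | docker load    # load the cached tarball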
	I0307 10:15:15.728016    4364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 10:15:15.739780    4364 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0307 10:15:15.739801    4364 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 10:15:15.739856    4364 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 10:15:15.749666    4364 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0307 10:15:15.769876    4364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0307 10:15:15.775193    4364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	W0307 10:15:15.775223    4364 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0307 10:15:15.775301    4364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0307 10:15:15.782479    4364 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0307 10:15:15.782500    4364 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0307 10:15:15.782559    4364 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0307 10:15:15.792964    4364 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0307 10:15:15.792984    4364 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0307 10:15:15.792991    4364 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0307 10:15:15.793004    4364 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0307 10:15:15.793042    4364 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0307 10:15:15.793042    4364 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0307 10:15:15.795708    4364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0307 10:15:15.797890    4364 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0307 10:15:15.809316    4364 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0307 10:15:15.809332    4364 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0307 10:15:15.809430    4364 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0307 10:15:15.817864    4364 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0307 10:15:15.817882    4364 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0307 10:15:15.817931    4364 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0307 10:15:15.818066    4364 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0307 10:15:15.818084    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0307 10:15:15.833582    4364 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0307 10:15:15.838075    4364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0307 10:15:15.859984    4364 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0307 10:15:15.860003    4364 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0307 10:15:15.860057    4364 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0307 10:15:15.870430    4364 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0307 10:15:15.870445    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0307 10:15:15.876006    4364 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0307 10:15:15.912017    4364 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0307 10:15:16.549066    4364 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0307 10:15:16.549587    4364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 10:15:16.583975    4364 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0307 10:15:16.584016    4364 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 10:15:16.584116    4364 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 10:15:16.609037    4364 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0307 10:15:16.609181    4364 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0307 10:15:16.611080    4364 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0307 10:15:16.611093    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0307 10:15:16.640464    4364 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0307 10:15:16.640477    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0307 10:15:14.269085    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:15:16.881319    4364 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0307 10:15:16.881357    4364 cache_images.go:92] duration metric: took 3.191868375s to LoadCachedImages
	W0307 10:15:16.881406    4364 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
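This warning is the real failure of the phase: every other image transferred and loaded, but the host-side cache file for kube-controller-manager is absent, so LoadCachedImages aborts as a whole. The gap is directly observable on the host:

    # list the host-side image cache; kube-controller-manager_v1.24.1 is the missing entry
    ls -l /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/
    stat /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1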
	I0307 10:15:16.881412    4364 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0307 10:15:16.881470    4364 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-853000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-853000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0307 10:15:16.881545    4364 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0307 10:15:16.895128    4364 cni.go:84] Creating CNI manager for ""
	I0307 10:15:16.895140    4364 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 10:15:16.895145    4364 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0307 10:15:16.895153    4364 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-853000 NodeName:stopped-upgrade-853000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0307 10:15:16.895234    4364 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-853000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
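
(Editorial note, not part of the captured log.) The block above is a four-document YAML stream — InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration — that the run writes to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A minimal Go sketch for reading such dumps, assuming the stream has been saved locally as kubeadm.yaml (hypothetical path; this is not minikube code):

// List the kind of each document in a multi-document kubeadm YAML stream.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("kubeadm.yaml") // hypothetical local copy of the dump above
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Documents are separated by "---" on its own line.
	for i, doc := range strings.Split(string(data), "\n---\n") {
		kind := "unknown"
		for _, line := range strings.Split(doc, "\n") {
			trimmed := strings.TrimSpace(line)
			if strings.HasPrefix(trimmed, "kind:") {
				kind = strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:"))
				break
			}
		}
		fmt.Printf("document %d: kind=%s\n", i+1, kind)
	}
}
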
	
	I0307 10:15:16.895290    4364 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0307 10:15:16.898010    4364 binaries.go:44] Found k8s binaries, skipping transfer
	I0307 10:15:16.898037    4364 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0307 10:15:16.901009    4364 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0307 10:15:16.906296    4364 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 10:15:16.910971    4364 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0307 10:15:16.916356    4364 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0307 10:15:16.917622    4364 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 10:15:16.921090    4364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:15:16.986619    4364 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 10:15:16.992939    4364 certs.go:68] Setting up /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000 for IP: 10.0.2.15
	I0307 10:15:16.992946    4364 certs.go:194] generating shared ca certs ...
	I0307 10:15:16.992955    4364 certs.go:226] acquiring lock for ca certs: {Name:mkc8d76d77d4efc8795fd6159d984855be90a666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:15:16.993114    4364 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18241-1349/.minikube/ca.key
	I0307 10:15:16.993885    4364 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18241-1349/.minikube/proxy-client-ca.key
	I0307 10:15:16.993891    4364 certs.go:256] generating profile certs ...
	I0307 10:15:16.994253    4364 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/client.key
	I0307 10:15:16.994275    4364 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/apiserver.key.be93b4ff
	I0307 10:15:16.994287    4364 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/apiserver.crt.be93b4ff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0307 10:15:17.061845    4364 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/apiserver.crt.be93b4ff ...
	I0307 10:15:17.061859    4364 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/apiserver.crt.be93b4ff: {Name:mk58f658068efa81789e4ab6ce5c845d22fe52f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:15:17.062177    4364 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/apiserver.key.be93b4ff ...
	I0307 10:15:17.062182    4364 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/apiserver.key.be93b4ff: {Name:mkc44d8cd384eb86a1dd6639cb29bb73d981af5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:15:17.062334    4364 certs.go:381] copying /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/apiserver.crt.be93b4ff -> /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/apiserver.crt
	I0307 10:15:17.062459    4364 certs.go:385] copying /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/apiserver.key.be93b4ff -> /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/apiserver.key
	I0307 10:15:17.062735    4364 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/proxy-client.key
	I0307 10:15:17.062950    4364 certs.go:484] found cert: /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/1781.pem (1338 bytes)
	W0307 10:15:17.063147    4364 certs.go:480] ignoring /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/1781_empty.pem, impossibly tiny 0 bytes
	I0307 10:15:17.063156    4364 certs.go:484] found cert: /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca-key.pem (1675 bytes)
	I0307 10:15:17.063174    4364 certs.go:484] found cert: /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem (1078 bytes)
	I0307 10:15:17.063193    4364 certs.go:484] found cert: /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem (1123 bytes)
	I0307 10:15:17.063210    4364 certs.go:484] found cert: /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/key.pem (1679 bytes)
	I0307 10:15:17.063250    4364 certs.go:484] found cert: /Users/jenkins/minikube-integration/18241-1349/.minikube/files/etc/ssl/certs/17812.pem (1708 bytes)
	I0307 10:15:17.063550    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 10:15:17.070258    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0307 10:15:17.077294    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 10:15:17.084572    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0307 10:15:17.091053    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0307 10:15:17.097542    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0307 10:15:17.104767    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0307 10:15:17.112083    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0307 10:15:17.118929    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/1781.pem --> /usr/share/ca-certificates/1781.pem (1338 bytes)
	I0307 10:15:17.125269    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/files/etc/ssl/certs/17812.pem --> /usr/share/ca-certificates/17812.pem (1708 bytes)
	I0307 10:15:17.132440    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 10:15:17.139354    4364 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0307 10:15:17.144361    4364 ssh_runner.go:195] Run: openssl version
	I0307 10:15:17.146651    4364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17812.pem && ln -fs /usr/share/ca-certificates/17812.pem /etc/ssl/certs/17812.pem"
	I0307 10:15:17.149715    4364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17812.pem
	I0307 10:15:17.151157    4364 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  7 17:37 /usr/share/ca-certificates/17812.pem
	I0307 10:15:17.151180    4364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17812.pem
	I0307 10:15:17.152947    4364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17812.pem /etc/ssl/certs/3ec20f2e.0"
	I0307 10:15:17.156362    4364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 10:15:17.159454    4364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 10:15:17.160895    4364 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 17:31 /usr/share/ca-certificates/minikubeCA.pem
	I0307 10:15:17.160920    4364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 10:15:17.162904    4364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0307 10:15:17.165739    4364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1781.pem && ln -fs /usr/share/ca-certificates/1781.pem /etc/ssl/certs/1781.pem"
	I0307 10:15:17.169000    4364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1781.pem
	I0307 10:15:17.170479    4364 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  7 17:37 /usr/share/ca-certificates/1781.pem
	I0307 10:15:17.170499    4364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1781.pem
	I0307 10:15:17.172203    4364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1781.pem /etc/ssl/certs/51391683.0"
	I0307 10:15:17.175167    4364 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0307 10:15:17.176663    4364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0307 10:15:17.179253    4364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0307 10:15:17.181298    4364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0307 10:15:17.183414    4364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0307 10:15:17.185198    4364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0307 10:15:17.187314    4364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
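
(Editorial note.) The six openssl runs above each execute `openssl x509 -noout -in <cert> -checkend 86400`, i.e. they verify that the certificate will still be valid 24 hours from now. A self-contained Go sketch of the same check, assuming only that the cert paths from the log exist on the machine where it runs (it does not reproduce minikube's internals):

// Report whether a PEM certificate expires within the given window,
// mirroring `openssl x509 -checkend`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True if "now + window" is past the certificate's NotAfter.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Path taken from the log; adjust when running outside the minikube guest.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
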
	I0307 10:15:17.189141    4364 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-853000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50517 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-853000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0307 10:15:17.189214    4364 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0307 10:15:17.199153    4364 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0307 10:15:17.202232    4364 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0307 10:15:17.202241    4364 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0307 10:15:17.202244    4364 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0307 10:15:17.202271    4364 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0307 10:15:17.205055    4364 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:15:17.205450    4364 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-853000" does not appear in /Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:15:17.205562    4364 kubeconfig.go:62] /Users/jenkins/minikube-integration/18241-1349/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-853000" cluster setting kubeconfig missing "stopped-upgrade-853000" context setting]
	I0307 10:15:17.205765    4364 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/kubeconfig: {Name:mkeef9e7922e618c2ac8219607b646aeaf5f61cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:15:17.206204    4364 kapi.go:59] client config for stopped-upgrade-853000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/client.key", CAFile:"/Users/jenkins/minikube-integration/18241-1349/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1016e36a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 10:15:17.206657    4364 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0307 10:15:17.209205    4364 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-853000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
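
(Editorial note.) The drift check above is driven by the `sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new` run a few lines earlier: a non-empty unified diff (diff exits with status 1) means the rendered config no longer matches what is on disk, so the cluster is reconfigured from the new file. A minimal Go sketch of that pattern, assuming the two paths from the log (illustrative only, not minikube's implementation):

// Treat `diff -u old new` exiting with status 1 as "config drifted".
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func configDrifted(oldPath, newPath string) (bool, []byte, error) {
	cmd := exec.Command("diff", "-u", oldPath, newPath)
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, out, nil // exit 1: files differ
	}
	return false, out, err // exit 0: identical; anything else: a real error
}

func main() {
	drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("diff failed:", err)
		return
	}
	if drifted {
		fmt.Printf("detected kubeadm config drift:\n%s", diff)
	}
}
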
	I0307 10:15:17.209212    4364 kubeadm.go:1153] stopping kube-system containers ...
	I0307 10:15:17.209248    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0307 10:15:17.223357    4364 docker.go:483] Stopping containers: [8c3d27435da1 84153db23698 2ed248da88ff 5b727911a818 02be06ae053e a9aa000cac5c 1390d083217d d14118a56b8e]
	I0307 10:15:17.223422    4364 ssh_runner.go:195] Run: docker stop 8c3d27435da1 84153db23698 2ed248da88ff 5b727911a818 02be06ae053e a9aa000cac5c 1390d083217d d14118a56b8e
	I0307 10:15:17.234131    4364 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0307 10:15:17.239907    4364 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 10:15:17.242546    4364 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 10:15:17.242551    4364 kubeadm.go:156] found existing configuration files:
	
	I0307 10:15:17.242572    4364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/admin.conf
	I0307 10:15:17.245272    4364 kubeadm.go:162] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0307 10:15:17.245306    4364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0307 10:15:17.248219    4364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/kubelet.conf
	I0307 10:15:17.250655    4364 kubeadm.go:162] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0307 10:15:17.250682    4364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0307 10:15:17.253302    4364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/controller-manager.conf
	I0307 10:15:17.256271    4364 kubeadm.go:162] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0307 10:15:17.256292    4364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0307 10:15:17.258879    4364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/scheduler.conf
	I0307 10:15:17.261380    4364 kubeadm.go:162] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0307 10:15:17.261402    4364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
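
(Editorial note.) The grep/rm sequence above implements a stale-config cleanup: each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf is kept only if it already references the expected control-plane endpoint (here https://control-plane.minikube.internal:50517); otherwise it is removed so kubeadm regenerates it. A hedged Go sketch of the same idea, skipping files that do not exist (the log's variant removes unconditionally via rm -f):

// Remove a kubeconfig-style file unless it already targets the endpoint.
package main

import (
	"fmt"
	"os"
	"strings"
)

func removeIfStale(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return nil // nothing to clean up
	}
	if err != nil {
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // config already points at the right endpoint
	}
	return os.Remove(path)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:50517"
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		if err := removeIfStale("/etc/kubernetes/"+f, endpoint); err != nil {
			fmt.Println(err)
		}
	}
}
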
	I0307 10:15:17.264304    4364 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 10:15:17.267215    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 10:15:17.293374    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 10:15:17.764070    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0307 10:15:17.872508    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 10:15:17.897176    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0307 10:15:17.930137    4364 api_server.go:52] waiting for apiserver process to appear ...
	I0307 10:15:17.930229    4364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 10:15:18.432316    4364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 10:15:18.932244    4364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 10:15:18.936356    4364 api_server.go:72] duration metric: took 1.006253958s to wait for apiserver process to appear ...
	I0307 10:15:18.936364    4364 api_server.go:88] waiting for apiserver healthz status ...
	I0307 10:15:18.936373    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:15:19.271169    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:15:19.271325    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:15:19.282484    4223 logs.go:276] 2 containers: [912e171f7628 3b5187de19fa]
	I0307 10:15:19.282573    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:15:19.293441    4223 logs.go:276] 2 containers: [e7399b2ae704 9378fea4f127]
	I0307 10:15:19.293520    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:15:19.304121    4223 logs.go:276] 1 containers: [acb346c0d2d8]
	I0307 10:15:19.304189    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:15:19.315104    4223 logs.go:276] 2 containers: [e50116dd9958 60e5f0621b8c]
	I0307 10:15:19.315175    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:15:19.326188    4223 logs.go:276] 1 containers: [438e893703a5]
	I0307 10:15:19.326254    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:15:19.337123    4223 logs.go:276] 2 containers: [7b2b4d88e84b 78c1d8b7fef3]
	I0307 10:15:19.337193    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:15:19.347660    4223 logs.go:276] 0 containers: []
	W0307 10:15:19.347673    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:15:19.347737    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:15:19.371539    4223 logs.go:276] 2 containers: [e8e2f52b53d1 e7d817fc8222]
	I0307 10:15:19.371559    4223 logs.go:123] Gathering logs for kube-apiserver [912e171f7628] ...
	I0307 10:15:19.371564    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912e171f7628"
	I0307 10:15:19.386953    4223 logs.go:123] Gathering logs for etcd [e7399b2ae704] ...
	I0307 10:15:19.386965    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7399b2ae704"
	I0307 10:15:19.401132    4223 logs.go:123] Gathering logs for coredns [acb346c0d2d8] ...
	I0307 10:15:19.401143    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb346c0d2d8"
	I0307 10:15:19.412791    4223 logs.go:123] Gathering logs for kube-scheduler [e50116dd9958] ...
	I0307 10:15:19.412819    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e50116dd9958"
	I0307 10:15:19.425348    4223 logs.go:123] Gathering logs for kube-apiserver [3b5187de19fa] ...
	I0307 10:15:19.425362    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b5187de19fa"
	I0307 10:15:19.445145    4223 logs.go:123] Gathering logs for kube-scheduler [60e5f0621b8c] ...
	I0307 10:15:19.445161    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e5f0621b8c"
	I0307 10:15:19.465515    4223 logs.go:123] Gathering logs for kube-controller-manager [78c1d8b7fef3] ...
	I0307 10:15:19.465530    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c1d8b7fef3"
	I0307 10:15:19.487036    4223 logs.go:123] Gathering logs for storage-provisioner [e7d817fc8222] ...
	I0307 10:15:19.487048    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7d817fc8222"
	I0307 10:15:19.504798    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:15:19.504811    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:15:19.541394    4223 logs.go:123] Gathering logs for kube-proxy [438e893703a5] ...
	I0307 10:15:19.541405    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 438e893703a5"
	I0307 10:15:19.553956    4223 logs.go:123] Gathering logs for storage-provisioner [e8e2f52b53d1] ...
	I0307 10:15:19.553970    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8e2f52b53d1"
	I0307 10:15:19.565860    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:15:19.565872    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:15:19.592381    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:15:19.592392    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:15:19.631093    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:15:19.631103    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:15:19.635549    4223 logs.go:123] Gathering logs for etcd [9378fea4f127] ...
	I0307 10:15:19.635557    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9378fea4f127"
	I0307 10:15:19.649660    4223 logs.go:123] Gathering logs for kube-controller-manager [7b2b4d88e84b] ...
	I0307 10:15:19.649672    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2b4d88e84b"
	I0307 10:15:19.667372    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:15:19.667384    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:15:22.182020    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:15:23.937843    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:15:23.937871    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:15:27.183716    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:15:27.184240    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:15:27.220542    4223 logs.go:276] 2 containers: [912e171f7628 3b5187de19fa]
	I0307 10:15:27.220696    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:15:27.241664    4223 logs.go:276] 2 containers: [e7399b2ae704 9378fea4f127]
	I0307 10:15:27.241767    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:15:27.256351    4223 logs.go:276] 1 containers: [acb346c0d2d8]
	I0307 10:15:27.256426    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:15:27.269197    4223 logs.go:276] 2 containers: [e50116dd9958 60e5f0621b8c]
	I0307 10:15:27.269273    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:15:27.280082    4223 logs.go:276] 1 containers: [438e893703a5]
	I0307 10:15:27.280149    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:15:27.291909    4223 logs.go:276] 2 containers: [7b2b4d88e84b 78c1d8b7fef3]
	I0307 10:15:27.291995    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:15:27.301989    4223 logs.go:276] 0 containers: []
	W0307 10:15:27.302001    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:15:27.302062    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:15:27.319984    4223 logs.go:276] 2 containers: [e8e2f52b53d1 e7d817fc8222]
	I0307 10:15:27.320017    4223 logs.go:123] Gathering logs for etcd [e7399b2ae704] ...
	I0307 10:15:27.320023    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7399b2ae704"
	I0307 10:15:27.333705    4223 logs.go:123] Gathering logs for kube-proxy [438e893703a5] ...
	I0307 10:15:27.333723    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 438e893703a5"
	I0307 10:15:27.350439    4223 logs.go:123] Gathering logs for coredns [acb346c0d2d8] ...
	I0307 10:15:27.350453    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb346c0d2d8"
	I0307 10:15:27.362599    4223 logs.go:123] Gathering logs for storage-provisioner [e7d817fc8222] ...
	I0307 10:15:27.362611    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7d817fc8222"
	I0307 10:15:27.374163    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:15:27.374176    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:15:27.378431    4223 logs.go:123] Gathering logs for kube-apiserver [3b5187de19fa] ...
	I0307 10:15:27.378437    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b5187de19fa"
	I0307 10:15:27.397065    4223 logs.go:123] Gathering logs for kube-controller-manager [78c1d8b7fef3] ...
	I0307 10:15:27.397076    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c1d8b7fef3"
	I0307 10:15:27.414331    4223 logs.go:123] Gathering logs for storage-provisioner [e8e2f52b53d1] ...
	I0307 10:15:27.414341    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8e2f52b53d1"
	I0307 10:15:27.425810    4223 logs.go:123] Gathering logs for etcd [9378fea4f127] ...
	I0307 10:15:27.425821    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9378fea4f127"
	I0307 10:15:27.440887    4223 logs.go:123] Gathering logs for kube-scheduler [e50116dd9958] ...
	I0307 10:15:27.440897    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e50116dd9958"
	I0307 10:15:27.454020    4223 logs.go:123] Gathering logs for kube-apiserver [912e171f7628] ...
	I0307 10:15:27.454031    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912e171f7628"
	I0307 10:15:27.468233    4223 logs.go:123] Gathering logs for kube-scheduler [60e5f0621b8c] ...
	I0307 10:15:27.468244    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e5f0621b8c"
	I0307 10:15:27.483351    4223 logs.go:123] Gathering logs for kube-controller-manager [7b2b4d88e84b] ...
	I0307 10:15:27.483363    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2b4d88e84b"
	I0307 10:15:27.500890    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:15:27.500902    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:15:27.525264    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:15:27.525275    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:15:27.537157    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:15:27.537171    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:15:27.574787    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:15:27.574816    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:15:28.938039    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:15:28.938085    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:15:30.122436    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:15:33.938235    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:15:33.938265    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:15:35.124553    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:15:35.124657    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:15:35.137383    4223 logs.go:276] 2 containers: [912e171f7628 3b5187de19fa]
	I0307 10:15:35.137462    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:15:35.149552    4223 logs.go:276] 2 containers: [e7399b2ae704 9378fea4f127]
	I0307 10:15:35.149627    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:15:35.161260    4223 logs.go:276] 1 containers: [acb346c0d2d8]
	I0307 10:15:35.161334    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:15:35.172384    4223 logs.go:276] 2 containers: [e50116dd9958 60e5f0621b8c]
	I0307 10:15:35.172460    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:15:35.183396    4223 logs.go:276] 1 containers: [438e893703a5]
	I0307 10:15:35.183465    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:15:35.194247    4223 logs.go:276] 2 containers: [7b2b4d88e84b 78c1d8b7fef3]
	I0307 10:15:35.194325    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:15:35.204485    4223 logs.go:276] 0 containers: []
	W0307 10:15:35.204497    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:15:35.204562    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:15:35.215364    4223 logs.go:276] 2 containers: [e8e2f52b53d1 e7d817fc8222]
	I0307 10:15:35.215383    4223 logs.go:123] Gathering logs for kube-apiserver [912e171f7628] ...
	I0307 10:15:35.215390    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912e171f7628"
	I0307 10:15:35.234248    4223 logs.go:123] Gathering logs for kube-proxy [438e893703a5] ...
	I0307 10:15:35.234258    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 438e893703a5"
	I0307 10:15:35.250757    4223 logs.go:123] Gathering logs for etcd [e7399b2ae704] ...
	I0307 10:15:35.250768    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7399b2ae704"
	I0307 10:15:35.269223    4223 logs.go:123] Gathering logs for storage-provisioner [e8e2f52b53d1] ...
	I0307 10:15:35.269234    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8e2f52b53d1"
	I0307 10:15:35.280645    4223 logs.go:123] Gathering logs for storage-provisioner [e7d817fc8222] ...
	I0307 10:15:35.280657    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7d817fc8222"
	I0307 10:15:35.292263    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:15:35.292275    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:15:35.315802    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:15:35.315813    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:15:35.320482    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:15:35.320490    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:15:35.358082    4223 logs.go:123] Gathering logs for etcd [9378fea4f127] ...
	I0307 10:15:35.358096    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9378fea4f127"
	I0307 10:15:35.380597    4223 logs.go:123] Gathering logs for kube-scheduler [e50116dd9958] ...
	I0307 10:15:35.380611    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e50116dd9958"
	I0307 10:15:35.395517    4223 logs.go:123] Gathering logs for kube-controller-manager [7b2b4d88e84b] ...
	I0307 10:15:35.395528    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2b4d88e84b"
	I0307 10:15:35.412980    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:15:35.412991    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:15:35.424685    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:15:35.424698    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:15:35.462457    4223 logs.go:123] Gathering logs for kube-apiserver [3b5187de19fa] ...
	I0307 10:15:35.462468    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b5187de19fa"
	I0307 10:15:35.482292    4223 logs.go:123] Gathering logs for kube-controller-manager [78c1d8b7fef3] ...
	I0307 10:15:35.482305    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c1d8b7fef3"
	I0307 10:15:35.499724    4223 logs.go:123] Gathering logs for coredns [acb346c0d2d8] ...
	I0307 10:15:35.499734    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb346c0d2d8"
	I0307 10:15:35.511175    4223 logs.go:123] Gathering logs for kube-scheduler [60e5f0621b8c] ...
	I0307 10:15:35.511188    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e5f0621b8c"
	I0307 10:15:38.938948    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:15:38.938991    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:15:38.027854    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:15:43.939392    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:15:43.939438    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:15:43.029997    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:15:43.030281    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:15:43.058342    4223 logs.go:276] 2 containers: [912e171f7628 3b5187de19fa]
	I0307 10:15:43.058476    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:15:43.075094    4223 logs.go:276] 2 containers: [e7399b2ae704 9378fea4f127]
	I0307 10:15:43.075172    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:15:43.088013    4223 logs.go:276] 1 containers: [acb346c0d2d8]
	I0307 10:15:43.088076    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:15:43.099757    4223 logs.go:276] 2 containers: [e50116dd9958 60e5f0621b8c]
	I0307 10:15:43.099835    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:15:43.110340    4223 logs.go:276] 1 containers: [438e893703a5]
	I0307 10:15:43.110407    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:15:43.121301    4223 logs.go:276] 2 containers: [7b2b4d88e84b 78c1d8b7fef3]
	I0307 10:15:43.121368    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:15:43.138258    4223 logs.go:276] 0 containers: []
	W0307 10:15:43.138271    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:15:43.138333    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:15:43.149093    4223 logs.go:276] 2 containers: [e8e2f52b53d1 e7d817fc8222]
	I0307 10:15:43.149113    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:15:43.149119    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:15:43.184473    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:15:43.184486    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:15:43.206760    4223 logs.go:123] Gathering logs for storage-provisioner [e8e2f52b53d1] ...
	I0307 10:15:43.206768    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8e2f52b53d1"
	I0307 10:15:43.218268    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:15:43.218280    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:15:43.230634    4223 logs.go:123] Gathering logs for kube-scheduler [e50116dd9958] ...
	I0307 10:15:43.230646    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e50116dd9958"
	I0307 10:15:43.242608    4223 logs.go:123] Gathering logs for kube-controller-manager [7b2b4d88e84b] ...
	I0307 10:15:43.242619    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2b4d88e84b"
	I0307 10:15:43.260503    4223 logs.go:123] Gathering logs for coredns [acb346c0d2d8] ...
	I0307 10:15:43.260514    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb346c0d2d8"
	I0307 10:15:43.272187    4223 logs.go:123] Gathering logs for kube-scheduler [60e5f0621b8c] ...
	I0307 10:15:43.272198    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e5f0621b8c"
	I0307 10:15:43.287121    4223 logs.go:123] Gathering logs for kube-proxy [438e893703a5] ...
	I0307 10:15:43.287133    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 438e893703a5"
	I0307 10:15:43.298681    4223 logs.go:123] Gathering logs for kube-controller-manager [78c1d8b7fef3] ...
	I0307 10:15:43.298693    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c1d8b7fef3"
	I0307 10:15:43.311940    4223 logs.go:123] Gathering logs for storage-provisioner [e7d817fc8222] ...
	I0307 10:15:43.311955    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7d817fc8222"
	I0307 10:15:43.324419    4223 logs.go:123] Gathering logs for kube-apiserver [912e171f7628] ...
	I0307 10:15:43.324429    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912e171f7628"
	I0307 10:15:43.338417    4223 logs.go:123] Gathering logs for etcd [e7399b2ae704] ...
	I0307 10:15:43.338430    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7399b2ae704"
	I0307 10:15:43.352989    4223 logs.go:123] Gathering logs for kube-apiserver [3b5187de19fa] ...
	I0307 10:15:43.353001    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b5187de19fa"
	I0307 10:15:43.373780    4223 logs.go:123] Gathering logs for etcd [9378fea4f127] ...
	I0307 10:15:43.373789    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9378fea4f127"
	I0307 10:15:43.392370    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:15:43.392381    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:15:43.429541    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:15:43.429549    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:15:45.935943    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:15:48.940099    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:15:48.940125    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:15:50.936414    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:15:50.936575    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:15:50.948515    4223 logs.go:276] 2 containers: [912e171f7628 3b5187de19fa]
	I0307 10:15:50.948591    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:15:50.958592    4223 logs.go:276] 2 containers: [e7399b2ae704 9378fea4f127]
	I0307 10:15:50.958663    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:15:50.969091    4223 logs.go:276] 1 containers: [acb346c0d2d8]
	I0307 10:15:50.969158    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:15:50.979744    4223 logs.go:276] 2 containers: [e50116dd9958 60e5f0621b8c]
	I0307 10:15:50.979813    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:15:50.990343    4223 logs.go:276] 1 containers: [438e893703a5]
	I0307 10:15:50.990415    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:15:51.001055    4223 logs.go:276] 2 containers: [7b2b4d88e84b 78c1d8b7fef3]
	I0307 10:15:51.001128    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:15:51.011195    4223 logs.go:276] 0 containers: []
	W0307 10:15:51.011208    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:15:51.011263    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:15:51.021805    4223 logs.go:276] 2 containers: [e8e2f52b53d1 e7d817fc8222]
	I0307 10:15:51.021822    4223 logs.go:123] Gathering logs for kube-proxy [438e893703a5] ...
	I0307 10:15:51.021827    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 438e893703a5"
	I0307 10:15:51.033433    4223 logs.go:123] Gathering logs for kube-controller-manager [7b2b4d88e84b] ...
	I0307 10:15:51.033444    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2b4d88e84b"
	I0307 10:15:51.050615    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:15:51.050625    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:15:51.062841    4223 logs.go:123] Gathering logs for kube-controller-manager [78c1d8b7fef3] ...
	I0307 10:15:51.062853    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c1d8b7fef3"
	I0307 10:15:51.076464    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:15:51.076474    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:15:51.080793    4223 logs.go:123] Gathering logs for etcd [e7399b2ae704] ...
	I0307 10:15:51.080802    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7399b2ae704"
	I0307 10:15:51.094645    4223 logs.go:123] Gathering logs for etcd [9378fea4f127] ...
	I0307 10:15:51.094654    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9378fea4f127"
	I0307 10:15:51.109046    4223 logs.go:123] Gathering logs for kube-scheduler [60e5f0621b8c] ...
	I0307 10:15:51.109057    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e5f0621b8c"
	I0307 10:15:51.123781    4223 logs.go:123] Gathering logs for coredns [acb346c0d2d8] ...
	I0307 10:15:51.123792    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb346c0d2d8"
	I0307 10:15:51.135055    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:15:51.135067    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:15:51.170878    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:15:51.170886    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:15:51.205027    4223 logs.go:123] Gathering logs for kube-apiserver [912e171f7628] ...
	I0307 10:15:51.205037    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912e171f7628"
	I0307 10:15:51.219774    4223 logs.go:123] Gathering logs for kube-apiserver [3b5187de19fa] ...
	I0307 10:15:51.219786    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b5187de19fa"
	I0307 10:15:51.239068    4223 logs.go:123] Gathering logs for kube-scheduler [e50116dd9958] ...
	I0307 10:15:51.239079    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e50116dd9958"
	I0307 10:15:51.251364    4223 logs.go:123] Gathering logs for storage-provisioner [e8e2f52b53d1] ...
	I0307 10:15:51.251376    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8e2f52b53d1"
	I0307 10:15:51.263161    4223 logs.go:123] Gathering logs for storage-provisioner [e7d817fc8222] ...
	I0307 10:15:51.263176    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7d817fc8222"
	I0307 10:15:51.274092    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:15:51.274103    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:15:53.940879    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:15:53.940904    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:15:53.797717    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:15:58.941872    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:15:58.941895    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:15:58.799933    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:15:58.800242    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:15:58.830672    4223 logs.go:276] 2 containers: [912e171f7628 3b5187de19fa]
	I0307 10:15:58.830769    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:15:58.855159    4223 logs.go:276] 2 containers: [e7399b2ae704 9378fea4f127]
	I0307 10:15:58.855244    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:15:58.873702    4223 logs.go:276] 1 containers: [acb346c0d2d8]
	I0307 10:15:58.873778    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:15:58.887539    4223 logs.go:276] 2 containers: [e50116dd9958 60e5f0621b8c]
	I0307 10:15:58.887608    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:15:58.898031    4223 logs.go:276] 1 containers: [438e893703a5]
	I0307 10:15:58.898105    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:15:58.909653    4223 logs.go:276] 2 containers: [7b2b4d88e84b 78c1d8b7fef3]
	I0307 10:15:58.909729    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:15:58.920390    4223 logs.go:276] 0 containers: []
	W0307 10:15:58.920403    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:15:58.920463    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:15:58.930490    4223 logs.go:276] 2 containers: [e8e2f52b53d1 e7d817fc8222]
	I0307 10:15:58.930507    4223 logs.go:123] Gathering logs for etcd [e7399b2ae704] ...
	I0307 10:15:58.930513    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7399b2ae704"
	I0307 10:15:58.948761    4223 logs.go:123] Gathering logs for coredns [acb346c0d2d8] ...
	I0307 10:15:58.948771    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb346c0d2d8"
	I0307 10:15:58.960163    4223 logs.go:123] Gathering logs for kube-scheduler [e50116dd9958] ...
	I0307 10:15:58.960174    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e50116dd9958"
	I0307 10:15:58.971387    4223 logs.go:123] Gathering logs for kube-controller-manager [7b2b4d88e84b] ...
	I0307 10:15:58.971402    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2b4d88e84b"
	I0307 10:15:58.993147    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:15:58.993159    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:15:59.016593    4223 logs.go:123] Gathering logs for etcd [9378fea4f127] ...
	I0307 10:15:59.016602    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9378fea4f127"
	I0307 10:15:59.030913    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:15:59.030926    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:15:59.043081    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:15:59.043094    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:15:59.081011    4223 logs.go:123] Gathering logs for kube-apiserver [912e171f7628] ...
	I0307 10:15:59.081021    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912e171f7628"
	I0307 10:15:59.096242    4223 logs.go:123] Gathering logs for kube-controller-manager [78c1d8b7fef3] ...
	I0307 10:15:59.096254    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78c1d8b7fef3"
	I0307 10:15:59.108132    4223 logs.go:123] Gathering logs for storage-provisioner [e7d817fc8222] ...
	I0307 10:15:59.108146    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7d817fc8222"
	I0307 10:15:59.118715    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:15:59.118726    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:15:59.123073    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:15:59.123079    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:15:59.158804    4223 logs.go:123] Gathering logs for kube-apiserver [3b5187de19fa] ...
	I0307 10:15:59.158815    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b5187de19fa"
	I0307 10:15:59.179109    4223 logs.go:123] Gathering logs for kube-scheduler [60e5f0621b8c] ...
	I0307 10:15:59.179119    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e5f0621b8c"
	I0307 10:15:59.193721    4223 logs.go:123] Gathering logs for kube-proxy [438e893703a5] ...
	I0307 10:15:59.193732    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 438e893703a5"
	I0307 10:15:59.205620    4223 logs.go:123] Gathering logs for storage-provisioner [e8e2f52b53d1] ...
	I0307 10:15:59.205632    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8e2f52b53d1"
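
The "Gathering logs for <component>" blocks above follow a fixed two-step recipe: resolve container IDs with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, then tail each one with docker logs --tail 400 <id>. A sketch of that flow run locally (minikube executes the same commands over SSH via ssh_runner; the component list here is a subset for illustration):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs mirrors `docker ps -a --filter=name=k8s_<name> --format={{.ID}}`.
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, comp := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
            ids, err := containerIDs(comp)
            if err != nil || len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", comp)
                continue
            }
            for _, id := range ids {
                // Same tail length as the log's `docker logs --tail 400 <id>`.
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("== %s [%s] ==\n%s", comp, id, logs)
            }
        }
    }
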
	I0307 10:16:01.718841    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:16:03.943118    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:03.943141    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:16:06.720975    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:06.721064    4223 kubeadm.go:591] duration metric: took 4m3.932536792s to restartPrimaryControlPlane
	W0307 10:16:06.721154    4223 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0307 10:16:06.721177    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0307 10:16:07.668176    4223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 10:16:07.673160    4223 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 10:16:07.676140    4223 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 10:16:07.678918    4223 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 10:16:07.678923    4223 kubeadm.go:156] found existing configuration files:
	
	I0307 10:16:07.678945    4223 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/admin.conf
	I0307 10:16:07.681430    4223 kubeadm.go:162] "https://control-plane.minikube.internal:50305" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0307 10:16:07.681455    4223 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0307 10:16:07.684216    4223 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/kubelet.conf
	I0307 10:16:07.686834    4223 kubeadm.go:162] "https://control-plane.minikube.internal:50305" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0307 10:16:07.686859    4223 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0307 10:16:07.689408    4223 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/controller-manager.conf
	I0307 10:16:07.692382    4223 kubeadm.go:162] "https://control-plane.minikube.internal:50305" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0307 10:16:07.692408    4223 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0307 10:16:07.695175    4223 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/scheduler.conf
	I0307 10:16:07.697702    4223 kubeadm.go:162] "https://control-plane.minikube.internal:50305" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0307 10:16:07.697726    4223 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
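
The kubeadm.go:162 checks above grep each /etc/kubernetes/*.conf for the expected control-plane endpoint and delete any file that lacks it; here grep exits with status 2 simply because `kubeadm reset` already removed the files. The same stale-config sweep as a Go sketch, with the endpoint copied from the log:

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func main() {
        endpoint := []byte("https://control-plane.minikube.internal:50305")
        for _, conf := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(conf)
            // A missing file and a file without the endpoint are treated alike:
            // both are stale (or already gone) and safe to remove.
            if err != nil || !bytes.Contains(data, endpoint) {
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
                os.Remove(conf)
            }
        }
    }
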
	I0307 10:16:07.700451    4223 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0307 10:16:07.719257    4223 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0307 10:16:07.719389    4223 kubeadm.go:309] [preflight] Running pre-flight checks
	I0307 10:16:07.773563    4223 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0307 10:16:07.773620    4223 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0307 10:16:07.773685    4223 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0307 10:16:07.823709    4223 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0307 10:16:07.827952    4223 out.go:204]   - Generating certificates and keys ...
	I0307 10:16:07.827992    4223 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0307 10:16:07.828037    4223 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0307 10:16:07.828074    4223 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0307 10:16:07.828108    4223 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0307 10:16:07.828146    4223 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0307 10:16:07.828178    4223 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0307 10:16:07.828207    4223 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0307 10:16:07.828236    4223 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0307 10:16:07.828277    4223 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0307 10:16:07.828314    4223 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0307 10:16:07.828343    4223 kubeadm.go:309] [certs] Using the existing "sa" key
	I0307 10:16:07.828374    4223 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0307 10:16:07.853310    4223 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0307 10:16:08.047609    4223 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0307 10:16:08.125670    4223 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0307 10:16:08.196261    4223 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0307 10:16:08.226684    4223 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 10:16:08.228561    4223 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 10:16:08.228600    4223 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0307 10:16:08.320400    4223 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0307 10:16:08.944736    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:08.944764    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:16:08.324518    4223 out.go:204]   - Booting up control plane ...
	I0307 10:16:08.324613    4223 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0307 10:16:08.324661    4223 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0307 10:16:08.324694    4223 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0307 10:16:08.324740    4223 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0307 10:16:08.324825    4223 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0307 10:16:12.325090    4223 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.005091 seconds
	I0307 10:16:12.325149    4223 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0307 10:16:12.329092    4223 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0307 10:16:12.841484    4223 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0307 10:16:12.841678    4223 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-064000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0307 10:16:13.348591    4223 kubeadm.go:309] [bootstrap-token] Using token: li2qax.cp8x61rj2vxpi2xh
	I0307 10:16:13.354908    4223 out.go:204]   - Configuring RBAC rules ...
	I0307 10:16:13.354984    4223 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0307 10:16:13.355057    4223 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0307 10:16:13.357851    4223 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0307 10:16:13.359134    4223 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0307 10:16:13.360203    4223 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0307 10:16:13.361232    4223 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0307 10:16:13.365153    4223 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0307 10:16:13.547157    4223 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0307 10:16:13.753436    4223 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0307 10:16:13.754011    4223 kubeadm.go:309] 
	I0307 10:16:13.754046    4223 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0307 10:16:13.754051    4223 kubeadm.go:309] 
	I0307 10:16:13.754091    4223 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0307 10:16:13.754108    4223 kubeadm.go:309] 
	I0307 10:16:13.754125    4223 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0307 10:16:13.754168    4223 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0307 10:16:13.754201    4223 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0307 10:16:13.754206    4223 kubeadm.go:309] 
	I0307 10:16:13.754238    4223 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0307 10:16:13.754243    4223 kubeadm.go:309] 
	I0307 10:16:13.754269    4223 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0307 10:16:13.754273    4223 kubeadm.go:309] 
	I0307 10:16:13.754304    4223 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0307 10:16:13.754344    4223 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0307 10:16:13.754399    4223 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0307 10:16:13.754405    4223 kubeadm.go:309] 
	I0307 10:16:13.754459    4223 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0307 10:16:13.754501    4223 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0307 10:16:13.754504    4223 kubeadm.go:309] 
	I0307 10:16:13.754564    4223 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token li2qax.cp8x61rj2vxpi2xh \
	I0307 10:16:13.754624    4223 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4f3f709732e35797580c9b8d11f82ef6c52734bdc7940106dd5836141654d720 \
	I0307 10:16:13.754637    4223 kubeadm.go:309] 	--control-plane 
	I0307 10:16:13.754642    4223 kubeadm.go:309] 
	I0307 10:16:13.754704    4223 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0307 10:16:13.754707    4223 kubeadm.go:309] 
	I0307 10:16:13.754748    4223 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token li2qax.cp8x61rj2vxpi2xh \
	I0307 10:16:13.754818    4223 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4f3f709732e35797580c9b8d11f82ef6c52734bdc7940106dd5836141654d720 
	I0307 10:16:13.754873    4223 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
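
The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's DER-encoded public key (SubjectPublicKeyInfo), which lets a joining node pin the CA without a prior trust channel. A sketch of recomputing it from ca.crt; the path is kubeadm's conventional one, not taken from this log:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // conventional kubeadm location
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Hash the DER-encoded SubjectPublicKeyInfo, as kubeadm does.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
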
	I0307 10:16:13.754879    4223 cni.go:84] Creating CNI manager for ""
	I0307 10:16:13.754887    4223 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 10:16:13.762532    4223 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0307 10:16:13.766579    4223 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0307 10:16:13.769558    4223 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
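
The "scp memory" step above streams an in-memory 457-byte buffer to /etc/cni/net.d/1-k8s.conflist on the guest; the log does not show its contents. For orientation, a sketch that writes a minimal bridge conflist of the shape CNI expects there; every field value (plugin names, the 10.244.0.0/16 subnet) is an assumption, not the payload minikube actually copied:

    package main

    import "os"

    // Illustrative bridge CNI config; values are assumptions, not minikube's payload.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "k8s",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        }
      ]
    }`

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }
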
	I0307 10:16:13.776816    4223 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0307 10:16:13.776886    4223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-064000 minikube.k8s.io/updated_at=2024_03_07T10_16_13_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c2be9b5c96c7962d271300acacf405d2402b272f minikube.k8s.io/name=running-upgrade-064000 minikube.k8s.io/primary=true
	I0307 10:16:13.776889    4223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 10:16:13.787598    4223 ops.go:34] apiserver oom_adj: -16
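
ops.go:34 confirms the apiserver runs with oom_adj -16, so the kernel strongly prefers other victims under memory pressure. The check is just a read of /proc/<pid>/oom_adj, as in this local equivalent (pgrep -n stands in for the log's $(pgrep kube-apiserver)):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        pid := strings.TrimSpace(string(out))
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Printf("apiserver oom_adj: %s", adj)
    }
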
	I0307 10:16:13.822210    4223 kubeadm.go:1106] duration metric: took 45.370792ms to wait for elevateKubeSystemPrivileges
	W0307 10:16:13.822249    4223 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0307 10:16:13.822254    4223 kubeadm.go:393] duration metric: took 4m11.047378083s to StartCluster
	I0307 10:16:13.822262    4223 settings.go:142] acquiring lock: {Name:mke72688bb63f8128eac153bbf90929d78ec9d29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:16:13.822563    4223 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:16:13.823044    4223 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/kubeconfig: {Name:mkeef9e7922e618c2ac8219607b646aeaf5f61cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:16:13.823239    4223 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:16:13.826610    4223 out.go:177] * Verifying Kubernetes components...
	I0307 10:16:13.823291    4223 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0307 10:16:13.823405    4223 config.go:182] Loaded profile config "running-upgrade-064000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 10:16:13.834501    4223 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-064000"
	I0307 10:16:13.834517    4223 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-064000"
	W0307 10:16:13.834521    4223 addons.go:243] addon storage-provisioner should already be in state true
	I0307 10:16:13.834537    4223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:16:13.834538    4223 host.go:66] Checking if "running-upgrade-064000" exists ...
	I0307 10:16:13.834521    4223 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-064000"
	I0307 10:16:13.834632    4223 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-064000"
	I0307 10:16:13.836033    4223 kapi.go:59] client config for running-upgrade-064000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/running-upgrade-064000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/running-upgrade-064000/client.key", CAFile:"/Users/jenkins/minikube-integration/18241-1349/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10225f6a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
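
The kapi.go:59 dump above shows the client config minikube derives for the profile: certificate-only auth (no token or basic auth), with Host pointing at the guest apiserver. A hedged reconstruction with client-go, using the cert/key/CA paths exactly as they appear in the dump; the ServerVersion call at the end is illustrative and would time out here just like the healthz polls:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg := &rest.Config{
            Host: "https://10.0.2.15:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/running-upgrade-064000/client.crt",
                KeyFile:  "/Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/running-upgrade-064000/client.key",
                CAFile:   "/Users/jenkins/minikube-integration/18241-1349/.minikube/ca.crt",
            },
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        sv, err := clientset.Discovery().ServerVersion()
        if err != nil {
            fmt.Println("apiserver unreachable:", err)
            return
        }
        fmt.Println("server version:", sv.GitVersion)
    }
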
	I0307 10:16:13.836242    4223 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-064000"
	W0307 10:16:13.836248    4223 addons.go:243] addon default-storageclass should already be in state true
	I0307 10:16:13.836254    4223 host.go:66] Checking if "running-upgrade-064000" exists ...
	I0307 10:16:13.840531    4223 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 10:16:13.946769    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:13.946794    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:16:13.843543    4223 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 10:16:13.843550    4223 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0307 10:16:13.843557    4223 sshutil.go:53] new ssh client: &{IP:localhost Port:50273 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/running-upgrade-064000/id_rsa Username:docker}
	I0307 10:16:13.844303    4223 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0307 10:16:13.844307    4223 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0307 10:16:13.844311    4223 sshutil.go:53] new ssh client: &{IP:localhost Port:50273 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/running-upgrade-064000/id_rsa Username:docker}
	I0307 10:16:13.924513    4223 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 10:16:13.930599    4223 api_server.go:52] waiting for apiserver process to appear ...
	I0307 10:16:13.930654    4223 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 10:16:13.934721    4223 api_server.go:72] duration metric: took 111.475041ms to wait for apiserver process to appear ...
	I0307 10:16:13.934728    4223 api_server.go:88] waiting for apiserver healthz status ...
	I0307 10:16:13.934735    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:16:13.980753    4223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 10:16:13.982533    4223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0307 10:16:18.948922    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:18.949230    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:16:18.986383    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:16:18.986536    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:16:19.008427    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:16:19.008545    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:16:19.023726    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:16:19.023810    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:16:19.036326    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:16:19.036414    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:16:19.047190    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:16:19.047263    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:16:19.057726    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:16:19.057794    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:16:19.067587    4364 logs.go:276] 0 containers: []
	W0307 10:16:19.067605    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:16:19.067670    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:16:19.077602    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:16:19.077632    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:16:19.077640    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:16:19.218881    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:16:19.218892    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:16:19.246630    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:16:19.246646    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:16:19.260301    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:16:19.260311    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:16:19.272050    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:16:19.272062    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:16:19.283767    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:16:19.283778    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:16:19.310869    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:16:19.310880    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:16:19.326453    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:16:19.326463    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:16:19.337871    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:16:19.337881    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:16:19.350016    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:16:19.350029    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:16:19.371088    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:16:19.371099    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:16:19.410369    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:16:19.410378    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:16:19.421265    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:16:19.421278    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:16:19.434989    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:16:19.434999    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:16:19.439632    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:16:19.439640    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:16:19.455162    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:16:19.455173    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:16:19.470547    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:16:19.470567    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:16:18.936753    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:18.936859    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:16:21.990616    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:16:23.937410    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:23.937439    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:16:26.991423    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:26.991575    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:16:27.006556    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:16:27.006644    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:16:27.019894    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:16:27.019976    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:16:27.030690    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:16:27.030764    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:16:27.041297    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:16:27.041369    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:16:27.051895    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:16:27.051963    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:16:27.063008    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:16:27.063083    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:16:27.073154    4364 logs.go:276] 0 containers: []
	W0307 10:16:27.073165    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:16:27.073223    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:16:27.083641    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:16:27.083659    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:16:27.083665    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:16:27.095758    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:16:27.095770    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:16:27.108609    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:16:27.108621    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:16:27.122787    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:16:27.122797    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:16:27.136828    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:16:27.136838    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:16:27.162507    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:16:27.162522    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:16:27.204510    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:16:27.204524    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:16:27.220206    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:16:27.220219    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:16:27.231750    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:16:27.231764    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:16:27.243632    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:16:27.243642    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:16:27.260530    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:16:27.260547    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:16:27.276014    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:16:27.276031    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:16:27.293584    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:16:27.293594    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:16:27.333874    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:16:27.333888    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:16:27.359014    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:16:27.359026    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:16:27.373339    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:16:27.373356    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:16:27.378284    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:16:27.378294    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:16:29.895859    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:16:28.937724    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:28.937747    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:16:34.898111    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:34.898321    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:16:34.924040    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:16:34.924163    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:16:34.945763    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:16:34.945837    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:16:34.958685    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:16:34.958746    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:16:34.970252    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:16:34.970327    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:16:34.981196    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:16:34.981263    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:16:34.991388    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:16:34.991447    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:16:35.004721    4364 logs.go:276] 0 containers: []
	W0307 10:16:35.004733    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:16:35.004793    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:16:35.014837    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:16:35.014860    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:16:35.014865    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:16:35.040218    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:16:35.040227    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:16:35.052385    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:16:35.052396    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:16:35.066507    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:16:35.066518    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:16:35.081773    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:16:35.081783    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:16:35.095549    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:16:35.095563    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:16:35.113502    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:16:35.113516    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:16:35.127340    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:16:35.127350    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:16:35.138913    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:16:35.138923    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:16:35.149816    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:16:35.149827    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:16:35.188061    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:16:35.188070    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:16:35.192097    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:16:35.192107    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:16:35.225890    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:16:35.225901    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:16:35.252398    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:16:35.252409    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:16:35.270339    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:16:35.270349    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:16:35.282271    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:16:35.282286    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:16:35.295948    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:16:35.295959    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:16:33.938147    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:33.938193    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:16:37.817773    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:16:38.938901    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:38.938941    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:16:43.939882    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:43.939926    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0307 10:16:44.338360    4223 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0307 10:16:44.341627    4223 out.go:177] * Enabled addons: storage-provisioner
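
The default-storageclass failure above comes from the addon's first API call, a List of StorageClasses, going through the same unreachable apiserver, which is why only storage-provisioner ends up enabled. The failing request expressed with client-go (clientset construction abbreviated; see the earlier rest.Config sketch):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg := &rest.Config{Host: "https://10.0.2.15:8443"} // TLS fields omitted for brevity
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // This is the request that times out in the log:
        // GET /apis/storage.k8s.io/v1/storageclasses
        scs, err := clientset.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            fmt.Println("Error listing StorageClasses:", err) // e.g. dial tcp 10.0.2.15:8443: i/o timeout
            return
        }
        fmt.Println("storage classes:", len(scs.Items))
    }
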
	I0307 10:16:42.819926    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:42.820141    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:16:42.838308    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:16:42.838411    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:16:42.851946    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:16:42.852042    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:16:42.866571    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:16:42.866644    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:16:42.878287    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:16:42.878351    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:16:42.888706    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:16:42.888771    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:16:42.899570    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:16:42.899642    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:16:42.910142    4364 logs.go:276] 0 containers: []
	W0307 10:16:42.910154    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:16:42.910227    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:16:42.920496    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:16:42.920511    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:16:42.920516    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:16:42.934526    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:16:42.934537    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:16:42.950192    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:16:42.950201    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:16:42.964770    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:16:42.964784    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:16:42.975837    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:16:42.975849    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:16:42.980543    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:16:42.980550    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:16:43.005519    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:16:43.005530    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:16:43.017133    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:16:43.017144    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:16:43.028940    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:16:43.028951    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:16:43.052741    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:16:43.052757    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:16:43.092020    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:16:43.092033    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:16:43.130279    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:16:43.130293    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:16:43.143936    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:16:43.143947    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:16:43.157089    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:16:43.157099    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:16:43.170799    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:16:43.170812    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:16:43.182322    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:16:43.182335    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:16:43.199570    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:16:43.199582    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:16:45.716892    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:16:44.350551    4223 addons.go:505] duration metric: took 30.528298041s for enable addons: enabled=[storage-provisioner]
	I0307 10:16:50.719097    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:50.719313    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:16:50.743109    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:16:50.743214    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:16:50.756801    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:16:50.756877    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:16:50.768737    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:16:50.768810    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:16:50.778576    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:16:50.778651    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:16:50.788864    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:16:50.788931    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:16:50.799141    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:16:50.799212    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:16:50.809463    4364 logs.go:276] 0 containers: []
	W0307 10:16:50.809476    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:16:50.809547    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:16:50.820431    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:16:50.820448    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:16:50.820454    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:16:50.856432    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:16:50.856446    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:16:50.870437    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:16:50.870447    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:16:50.884914    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:16:50.884924    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:16:50.895943    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:16:50.895955    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:16:50.921876    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:16:50.921889    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:16:50.960419    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:16:50.960427    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:16:50.975802    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:16:50.975815    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:16:50.998226    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:16:50.998237    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:16:51.017442    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:16:51.017452    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:16:51.036157    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:16:51.036167    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:16:51.040197    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:16:51.040210    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:16:51.052625    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:16:51.052635    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:16:51.063694    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:16:51.063704    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:16:51.079558    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:16:51.079570    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:16:51.091016    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:16:51.091029    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:16:51.104305    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:16:51.104315    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:16:48.941054    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:48.941077    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:16:53.630733    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:16:53.942505    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:53.942552    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:16:58.632964    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:58.633208    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:16:58.655254    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:16:58.655362    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:16:58.670653    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:16:58.670737    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:16:58.683195    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:16:58.683266    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:16:58.694082    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:16:58.694152    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:16:58.704863    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:16:58.704939    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:16:58.715316    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:16:58.715389    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:16:58.727171    4364 logs.go:276] 0 containers: []
	W0307 10:16:58.727185    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:16:58.727257    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:16:58.738088    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:16:58.738111    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:16:58.738118    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:16:58.778084    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:16:58.778097    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:16:58.790275    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:16:58.790291    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:16:58.805141    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:16:58.805152    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:16:58.817145    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:16:58.817155    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:16:58.832704    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:16:58.832718    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:16:58.837003    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:16:58.837012    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:16:58.854921    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:16:58.854931    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:16:58.879773    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:16:58.879787    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:16:58.893460    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:16:58.893473    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:16:58.908295    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:16:58.908306    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:16:58.925463    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:16:58.925473    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:16:58.966533    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:16:58.966549    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:16:58.980276    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:16:58.980290    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:16:58.991495    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:16:58.991507    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:16:59.015572    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:16:59.015588    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:16:59.027416    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:16:59.027429    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:17:01.541310    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:16:58.944481    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:58.944515    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:17:06.543476    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:17:06.543873    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:17:06.579102    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:17:06.579247    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:17:06.600009    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:17:06.600101    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:17:06.614597    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:17:06.614687    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:17:06.627043    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:17:06.627115    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:17:06.637332    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:17:06.637405    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:17:06.651109    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:17:06.651185    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:17:06.661487    4364 logs.go:276] 0 containers: []
	W0307 10:17:06.661497    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:17:06.661548    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:17:06.679773    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:17:06.679792    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:17:06.679799    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:17:06.685890    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:17:06.685899    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:17:06.703955    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:17:06.703967    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:17:06.715302    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:17:06.715313    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:17:03.946625    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:17:03.946664    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:17:06.754894    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:17:06.754904    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:17:06.770507    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:17:06.770517    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:17:06.803636    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:17:06.803653    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:17:06.815144    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:17:06.815158    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:17:06.826936    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:17:06.826948    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:17:06.861133    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:17:06.861147    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:17:06.875858    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:17:06.875869    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:17:06.889645    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:17:06.889661    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:17:06.914983    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:17:06.914992    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:17:06.926354    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:17:06.926365    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:17:06.947490    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:17:06.947501    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:17:06.958652    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:17:06.958662    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:17:06.973453    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:17:06.973464    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:17:09.486709    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:17:08.948777    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:17:08.948820    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:17:14.488978    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:17:14.489162    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:17:14.507417    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:17:14.507518    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:17:14.524879    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:17:14.524955    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:17:14.537329    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:17:14.537396    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:17:14.547829    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:17:14.547900    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:17:14.558322    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:17:14.558388    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:17:14.568733    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:17:14.568801    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:17:14.580384    4364 logs.go:276] 0 containers: []
	W0307 10:17:14.580396    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:17:14.580454    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:17:14.590913    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:17:14.590930    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:17:14.590936    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:17:14.629061    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:17:14.629075    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:17:14.643092    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:17:14.643102    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:17:14.655511    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:17:14.655520    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:17:14.674585    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:17:14.674600    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:17:14.688294    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:17:14.688304    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:17:14.711566    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:17:14.711577    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:17:14.723401    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:17:14.723411    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:17:14.737232    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:17:14.737241    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:17:14.749296    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:17:14.749306    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:17:14.761031    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:17:14.761042    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:17:14.798912    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:17:14.798925    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:17:14.803309    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:17:14.803316    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:17:14.828531    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:17:14.828542    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:17:14.845674    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:17:14.845686    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:17:14.856616    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:17:14.856628    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:17:14.871388    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:17:14.871399    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:17:13.950952    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:17:13.951113    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:17:13.966222    4223 logs.go:276] 1 containers: [bfad85a2aa85]
	I0307 10:17:13.966303    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:17:13.977103    4223 logs.go:276] 1 containers: [f5dd3c2f1586]
	I0307 10:17:13.977170    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:17:13.988116    4223 logs.go:276] 2 containers: [8f40abedda95 b646ef99863b]
	I0307 10:17:13.988189    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:17:14.005692    4223 logs.go:276] 1 containers: [89b036ed2ce0]
	I0307 10:17:14.005764    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:17:14.016923    4223 logs.go:276] 1 containers: [91458cddd2a8]
	I0307 10:17:14.016996    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:17:14.034372    4223 logs.go:276] 1 containers: [a5eda657976b]
	I0307 10:17:14.034455    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:17:14.056723    4223 logs.go:276] 0 containers: []
	W0307 10:17:14.056741    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:17:14.056814    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:17:14.069659    4223 logs.go:276] 1 containers: [e098dcf633e8]
	I0307 10:17:14.069677    4223 logs.go:123] Gathering logs for kube-controller-manager [a5eda657976b] ...
	I0307 10:17:14.069683    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5eda657976b"
	I0307 10:17:14.088664    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:17:14.088674    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:17:14.112269    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:17:14.112277    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:17:14.116893    4223 logs.go:123] Gathering logs for kube-apiserver [bfad85a2aa85] ...
	I0307 10:17:14.116899    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfad85a2aa85"
	I0307 10:17:14.131326    4223 logs.go:123] Gathering logs for coredns [8f40abedda95] ...
	I0307 10:17:14.131343    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f40abedda95"
	I0307 10:17:14.142910    4223 logs.go:123] Gathering logs for coredns [b646ef99863b] ...
	I0307 10:17:14.142922    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b646ef99863b"
	I0307 10:17:14.154268    4223 logs.go:123] Gathering logs for kube-scheduler [89b036ed2ce0] ...
	I0307 10:17:14.154280    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89b036ed2ce0"
	I0307 10:17:14.169135    4223 logs.go:123] Gathering logs for kube-proxy [91458cddd2a8] ...
	I0307 10:17:14.169148    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91458cddd2a8"
	I0307 10:17:14.180307    4223 logs.go:123] Gathering logs for storage-provisioner [e098dcf633e8] ...
	I0307 10:17:14.180317    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098dcf633e8"
	I0307 10:17:14.192124    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:17:14.192135    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:17:14.203502    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:17:14.203514    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 10:17:14.237030    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:17:14.237124    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:17:14.238065    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:17:14.238072    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:17:14.273124    4223 logs.go:123] Gathering logs for etcd [f5dd3c2f1586] ...
	I0307 10:17:14.273142    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5dd3c2f1586"
	I0307 10:17:14.287819    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:17:14.287829    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 10:17:14.287858    4223 out.go:239] X Problems detected in kubelet:
	W0307 10:17:14.287863    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:17:14.287867    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:17:14.287891    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:17:14.287896    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:17:17.384156    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:17:22.386452    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:17:22.386609    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:17:22.398625    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:17:22.398705    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:17:22.408942    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:17:22.409014    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:17:22.419300    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:17:22.419369    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:17:22.431392    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:17:22.431469    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:17:22.445748    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:17:22.445821    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:17:22.456044    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:17:22.456111    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:17:22.466190    4364 logs.go:276] 0 containers: []
	W0307 10:17:22.466202    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:17:22.466261    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:17:22.476854    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:17:22.476875    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:17:22.476881    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:17:22.488165    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:17:22.488178    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:17:22.502448    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:17:22.502458    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:17:22.517107    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:17:22.517119    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:17:22.534778    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:17:22.534788    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:17:22.548267    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:17:22.548281    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:17:22.559596    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:17:22.559606    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:17:22.572303    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:17:22.572315    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:17:22.590942    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:17:22.590956    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:17:22.609164    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:17:22.609173    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:17:22.633736    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:17:22.633752    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:17:22.637763    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:17:22.637769    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:17:22.648834    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:17:22.648845    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:17:22.674115    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:17:22.674135    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:17:22.689748    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:17:22.689759    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:17:22.708909    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:17:22.708921    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:17:22.746135    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:17:22.746149    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:17:25.283212    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:17:24.291783    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:17:30.285367    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:17:30.285607    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:17:30.309256    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:17:30.309354    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:17:30.323594    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:17:30.323675    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:17:30.335860    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:17:30.335928    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:17:30.346314    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:17:30.346387    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:17:30.356809    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:17:30.356883    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:17:30.371867    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:17:30.371933    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:17:30.381822    4364 logs.go:276] 0 containers: []
	W0307 10:17:30.381834    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:17:30.381889    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:17:30.392655    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:17:30.392673    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:17:30.392680    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:17:30.427306    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:17:30.427317    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:17:30.438493    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:17:30.438505    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:17:30.454326    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:17:30.454338    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:17:30.466296    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:17:30.466306    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:17:30.487557    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:17:30.487567    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:17:30.500592    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:17:30.500608    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:17:30.513188    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:17:30.513198    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:17:30.530043    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:17:30.530053    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:17:30.541402    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:17:30.541413    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:17:30.579227    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:17:30.579235    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:17:30.593171    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:17:30.593186    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:17:30.607037    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:17:30.607047    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:17:30.630448    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:17:30.630455    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:17:30.634956    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:17:30.634963    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:17:30.660369    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:17:30.660379    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:17:30.673996    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:17:30.674007    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:17:29.294125    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:17:29.294249    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:17:29.306450    4223 logs.go:276] 1 containers: [bfad85a2aa85]
	I0307 10:17:29.306524    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:17:29.320220    4223 logs.go:276] 1 containers: [f5dd3c2f1586]
	I0307 10:17:29.320295    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:17:29.330851    4223 logs.go:276] 2 containers: [8f40abedda95 b646ef99863b]
	I0307 10:17:29.330923    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:17:29.341262    4223 logs.go:276] 1 containers: [89b036ed2ce0]
	I0307 10:17:29.341330    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:17:29.352256    4223 logs.go:276] 1 containers: [91458cddd2a8]
	I0307 10:17:29.352333    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:17:29.362874    4223 logs.go:276] 1 containers: [a5eda657976b]
	I0307 10:17:29.362938    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:17:29.373094    4223 logs.go:276] 0 containers: []
	W0307 10:17:29.373104    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:17:29.373155    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:17:29.390619    4223 logs.go:276] 1 containers: [e098dcf633e8]
	I0307 10:17:29.390633    4223 logs.go:123] Gathering logs for kube-apiserver [bfad85a2aa85] ...
	I0307 10:17:29.390638    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfad85a2aa85"
	I0307 10:17:29.404637    4223 logs.go:123] Gathering logs for etcd [f5dd3c2f1586] ...
	I0307 10:17:29.404650    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5dd3c2f1586"
	I0307 10:17:29.418640    4223 logs.go:123] Gathering logs for coredns [8f40abedda95] ...
	I0307 10:17:29.418651    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f40abedda95"
	I0307 10:17:29.429788    4223 logs.go:123] Gathering logs for coredns [b646ef99863b] ...
	I0307 10:17:29.429804    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b646ef99863b"
	I0307 10:17:29.441040    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:17:29.441050    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:17:29.465471    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:17:29.465478    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:17:29.477468    4223 logs.go:123] Gathering logs for storage-provisioner [e098dcf633e8] ...
	I0307 10:17:29.477479    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098dcf633e8"
	I0307 10:17:29.488871    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:17:29.488882    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 10:17:29.523842    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:17:29.523936    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:17:29.524890    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:17:29.524895    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:17:29.529115    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:17:29.529120    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:17:29.573917    4223 logs.go:123] Gathering logs for kube-scheduler [89b036ed2ce0] ...
	I0307 10:17:29.573929    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89b036ed2ce0"
	I0307 10:17:29.589783    4223 logs.go:123] Gathering logs for kube-proxy [91458cddd2a8] ...
	I0307 10:17:29.589795    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91458cddd2a8"
	I0307 10:17:29.601565    4223 logs.go:123] Gathering logs for kube-controller-manager [a5eda657976b] ...
	I0307 10:17:29.601579    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5eda657976b"
	I0307 10:17:29.622309    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:17:29.622319    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 10:17:29.622344    4223 out.go:239] X Problems detected in kubelet:
	W0307 10:17:29.622348    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:17:29.622352    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:17:29.622356    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:17:29.622359    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:17:33.186988    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:17:38.188737    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:17:38.188980    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:17:38.215822    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:17:38.215914    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:17:38.230480    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:17:38.230559    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:17:38.242218    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:17:38.242293    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:17:38.253364    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:17:38.253435    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:17:38.264369    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:17:38.264439    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:17:38.274963    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:17:38.275029    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:17:38.285404    4364 logs.go:276] 0 containers: []
	W0307 10:17:38.285414    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:17:38.285474    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:17:38.295351    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:17:38.295367    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:17:38.295372    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:17:38.311450    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:17:38.311461    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:17:38.325865    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:17:38.325876    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:17:38.346572    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:17:38.346583    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:17:38.358609    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:17:38.358621    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:17:38.373318    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:17:38.373329    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:17:38.388777    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:17:38.388788    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:17:38.400496    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:17:38.400508    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:17:38.412100    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:17:38.412112    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:17:38.423648    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:17:38.423660    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:17:38.427987    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:17:38.427995    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:17:38.452966    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:17:38.452976    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:17:38.467520    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:17:38.467531    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:17:38.478801    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:17:38.478812    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:17:38.489504    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:17:38.489520    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:17:38.514635    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:17:38.514643    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:17:38.552633    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:17:38.552643    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:17:41.090614    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:17:39.625301    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:17:46.092774    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:17:46.093058    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:17:46.121677    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:17:46.121797    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:17:46.139209    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:17:46.139308    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:17:46.152366    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:17:46.152441    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:17:46.164453    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:17:46.164524    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:17:46.174520    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:17:46.174593    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:17:46.188909    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:17:46.188982    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:17:46.199989    4364 logs.go:276] 0 containers: []
	W0307 10:17:46.200005    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:17:46.200068    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:17:46.210775    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:17:46.210796    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:17:46.210801    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:17:46.234673    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:17:46.234683    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:17:46.253561    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:17:46.253571    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:17:46.267472    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:17:46.267482    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:17:46.278632    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:17:46.278643    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:17:46.294284    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:17:46.294296    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:17:46.310802    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:17:46.310815    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:17:46.322196    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:17:46.322206    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:17:46.359277    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:17:46.359288    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:17:46.376726    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:17:46.376737    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:17:46.388473    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:17:46.388484    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:17:46.402185    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:17:46.402195    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:17:46.413370    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:17:46.413381    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:17:46.451625    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:17:46.451634    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:17:46.455785    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:17:46.455791    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:17:46.480481    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:17:46.480491    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:17:46.495444    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:17:46.495457    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:17:44.626488    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:17:44.626738    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:17:44.653101    4223 logs.go:276] 1 containers: [bfad85a2aa85]
	I0307 10:17:44.653219    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:17:44.670687    4223 logs.go:276] 1 containers: [f5dd3c2f1586]
	I0307 10:17:44.670769    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:17:44.683946    4223 logs.go:276] 2 containers: [8f40abedda95 b646ef99863b]
	I0307 10:17:44.684008    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:17:44.695518    4223 logs.go:276] 1 containers: [89b036ed2ce0]
	I0307 10:17:44.695588    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:17:44.705818    4223 logs.go:276] 1 containers: [91458cddd2a8]
	I0307 10:17:44.705880    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:17:44.716512    4223 logs.go:276] 1 containers: [a5eda657976b]
	I0307 10:17:44.716583    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:17:44.726740    4223 logs.go:276] 0 containers: []
	W0307 10:17:44.726755    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:17:44.726810    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:17:44.737431    4223 logs.go:276] 1 containers: [e098dcf633e8]
	I0307 10:17:44.737447    4223 logs.go:123] Gathering logs for coredns [b646ef99863b] ...
	I0307 10:17:44.737452    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b646ef99863b"
	I0307 10:17:44.749244    4223 logs.go:123] Gathering logs for kube-scheduler [89b036ed2ce0] ...
	I0307 10:17:44.749258    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89b036ed2ce0"
	I0307 10:17:44.763946    4223 logs.go:123] Gathering logs for kube-proxy [91458cddd2a8] ...
	I0307 10:17:44.763957    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91458cddd2a8"
	I0307 10:17:44.775528    4223 logs.go:123] Gathering logs for storage-provisioner [e098dcf633e8] ...
	I0307 10:17:44.775541    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098dcf633e8"
	I0307 10:17:44.787098    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:17:44.787110    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:17:44.798369    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:17:44.798378    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:17:44.803177    4223 logs.go:123] Gathering logs for kube-apiserver [bfad85a2aa85] ...
	I0307 10:17:44.803182    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfad85a2aa85"
	I0307 10:17:44.817832    4223 logs.go:123] Gathering logs for etcd [f5dd3c2f1586] ...
	I0307 10:17:44.817843    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5dd3c2f1586"
	I0307 10:17:44.836932    4223 logs.go:123] Gathering logs for coredns [8f40abedda95] ...
	I0307 10:17:44.836946    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f40abedda95"
	I0307 10:17:44.848652    4223 logs.go:123] Gathering logs for kube-controller-manager [a5eda657976b] ...
	I0307 10:17:44.848666    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5eda657976b"
	I0307 10:17:44.866674    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:17:44.866686    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:17:44.890602    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:17:44.890611    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 10:17:44.923775    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:17:44.923870    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:17:44.924823    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:17:44.924831    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:17:44.959701    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:17:44.959716    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 10:17:44.959745    4223 out.go:239] X Problems detected in kubelet:
	W0307 10:17:44.959752    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:17:44.959755    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:17:44.959759    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:17:44.959762    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
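The cycle above shows the diagnostic pattern minikube repeats throughout this test: for each control-plane component it enumerates container IDs with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, then tails each container's logs with docker logs --tail 400 <id>. A minimal Go sketch of that pattern, assuming local docker access; the helper and variable names are illustrative, not minikube's actual logs.go source:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists all containers (running or exited) whose name
	// matches a kubernetes component, mirroring:
	//   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager",
			"kindnet", "storage-provisioner"} {
			ids, err := containerIDs(c)
			if err != nil || len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", c)
				continue
			}
			for _, id := range ids {
				// Mirrors: /bin/bash -c "docker logs --tail 400 <id>"
				logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("==> %s [%s] <==\n%s\n", c, id, logs)
			}
		}
	}

In the log this runs over SSH inside the guest (ssh_runner.go:195 wraps each command in /bin/bash -c); the sketch runs the same commands locally for brevity.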
	I0307 10:17:49.011502    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:17:54.013717    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:17:54.013981    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:17:54.034361    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:17:54.034457    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:17:54.048766    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:17:54.048850    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:17:54.063886    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:17:54.063952    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:17:54.074798    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:17:54.074890    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:17:54.085745    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:17:54.085812    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:17:54.096662    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:17:54.096738    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:17:54.106490    4364 logs.go:276] 0 containers: []
	W0307 10:17:54.106501    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:17:54.106562    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:17:54.117021    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:17:54.117045    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:17:54.117051    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:17:54.131508    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:17:54.131519    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:17:54.142260    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:17:54.142272    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:17:54.156859    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:17:54.156870    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:17:54.168965    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:17:54.168975    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:17:54.180150    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:17:54.180162    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:17:54.218877    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:17:54.218886    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:17:54.254586    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:17:54.254596    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:17:54.280159    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:17:54.280171    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:17:54.294409    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:17:54.294420    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:17:54.309612    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:17:54.309626    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:17:54.333223    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:17:54.333231    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:17:54.344767    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:17:54.344777    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:17:54.362190    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:17:54.362201    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:17:54.374055    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:17:54.374066    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:17:54.378289    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:17:54.378297    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:17:54.392045    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:17:54.392056    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:17:54.963612    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:17:56.908019    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:17:59.965853    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:17:59.966000    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:17:59.982370    4223 logs.go:276] 1 containers: [bfad85a2aa85]
	I0307 10:17:59.982464    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:17:59.994584    4223 logs.go:276] 1 containers: [f5dd3c2f1586]
	I0307 10:17:59.994656    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:18:00.006069    4223 logs.go:276] 2 containers: [8f40abedda95 b646ef99863b]
	I0307 10:18:00.006138    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:18:00.016628    4223 logs.go:276] 1 containers: [89b036ed2ce0]
	I0307 10:18:00.016702    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:18:00.027213    4223 logs.go:276] 1 containers: [91458cddd2a8]
	I0307 10:18:00.027276    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:18:00.037929    4223 logs.go:276] 1 containers: [a5eda657976b]
	I0307 10:18:00.038008    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:18:00.048034    4223 logs.go:276] 0 containers: []
	W0307 10:18:00.048045    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:18:00.048105    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:18:00.058943    4223 logs.go:276] 1 containers: [e098dcf633e8]
	I0307 10:18:00.058959    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:18:00.058965    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:18:00.063348    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:18:00.063356    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:18:00.099137    4223 logs.go:123] Gathering logs for coredns [8f40abedda95] ...
	I0307 10:18:00.099150    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f40abedda95"
	I0307 10:18:00.111760    4223 logs.go:123] Gathering logs for coredns [b646ef99863b] ...
	I0307 10:18:00.111772    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b646ef99863b"
	I0307 10:18:00.124687    4223 logs.go:123] Gathering logs for kube-scheduler [89b036ed2ce0] ...
	I0307 10:18:00.124698    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89b036ed2ce0"
	I0307 10:18:00.139065    4223 logs.go:123] Gathering logs for storage-provisioner [e098dcf633e8] ...
	I0307 10:18:00.139078    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098dcf633e8"
	I0307 10:18:00.151082    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:18:00.151095    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:18:00.173945    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:18:00.173955    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:18:00.188691    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:18:00.188702    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 10:18:00.223565    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:18:00.223658    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:18:00.224577    4223 logs.go:123] Gathering logs for kube-apiserver [bfad85a2aa85] ...
	I0307 10:18:00.224583    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfad85a2aa85"
	I0307 10:18:00.238946    4223 logs.go:123] Gathering logs for etcd [f5dd3c2f1586] ...
	I0307 10:18:00.238960    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5dd3c2f1586"
	I0307 10:18:00.252834    4223 logs.go:123] Gathering logs for kube-proxy [91458cddd2a8] ...
	I0307 10:18:00.252845    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91458cddd2a8"
	I0307 10:18:00.265164    4223 logs.go:123] Gathering logs for kube-controller-manager [a5eda657976b] ...
	I0307 10:18:00.265176    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5eda657976b"
	I0307 10:18:00.282236    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:18:00.282246    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 10:18:00.282270    4223 out.go:239] X Problems detected in kubelet:
	W0307 10:18:00.282275    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:18:00.282278    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:18:00.282283    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:18:00.282286    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
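Between gathering passes, each process (pids 4223 and 4364 interleave here) polls the apiserver health endpoint and gives up after roughly five seconds with "context deadline exceeded (Client.Timeout exceeded while awaiting headers)". A minimal sketch of such a probe, assuming a plain HTTP GET with a short client timeout against the self-signed apiserver certificate; the 5-second timeout and the TLS setting are assumptions read off the timestamps, not confirmed from api_server.go:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func checkHealthz(url string) error {
		client := &http.Client{
			// Matches the ~5s gap between "Checking apiserver healthz"
			// and the "stopped: ... Client.Timeout exceeded" line.
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The apiserver at 10.0.2.15:8443 serves a self-signed cert.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return fmt.Errorf("stopped: %s: %w", url, err)
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d", resp.StatusCode)
		}
		return nil
	}

	func main() {
		fmt.Println(checkHealthz("https://10.0.2.15:8443/healthz"))
	}

Every probe in this section fails the same way, which is why the gathering cycles keep repeating until the test times out.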
	I0307 10:18:01.910535    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:18:01.910717    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:18:01.931262    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:18:01.931374    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:18:01.944300    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:18:01.944372    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:18:01.955467    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:18:01.955537    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:18:01.965607    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:18:01.965686    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:18:01.975699    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:18:01.975758    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:18:01.992819    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:18:01.992885    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:18:02.002891    4364 logs.go:276] 0 containers: []
	W0307 10:18:02.002905    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:18:02.002969    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:18:02.013717    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:18:02.013736    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:18:02.013741    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:18:02.036468    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:18:02.036479    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:18:02.047651    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:18:02.047662    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:18:02.062315    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:18:02.062330    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:18:02.075766    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:18:02.075777    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:18:02.111769    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:18:02.111783    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:18:02.132019    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:18:02.132028    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:18:02.143865    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:18:02.143877    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:18:02.155425    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:18:02.155436    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:18:02.159748    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:18:02.159759    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:18:02.185066    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:18:02.185077    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:18:02.199701    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:18:02.199713    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:18:02.211910    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:18:02.211920    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:18:02.235054    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:18:02.235063    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:18:02.273180    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:18:02.273198    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:18:02.285799    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:18:02.285811    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:18:02.301097    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:18:02.301110    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:18:04.821266    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:18:09.823782    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:18:09.824092    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:18:09.853395    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:18:09.853521    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:18:09.872837    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:18:09.872930    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:18:09.886353    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:18:09.886424    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:18:09.898898    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:18:09.898973    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:18:09.909277    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:18:09.909348    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:18:09.919978    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:18:09.920047    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:18:09.932026    4364 logs.go:276] 0 containers: []
	W0307 10:18:09.932039    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:18:09.932102    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:18:09.942396    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:18:09.942414    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:18:09.942419    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:18:09.963800    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:18:09.963810    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:18:09.975846    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:18:09.975857    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:18:09.999893    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:18:09.999902    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:18:10.011561    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:18:10.011572    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:18:10.025729    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:18:10.025738    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:18:10.040089    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:18:10.040098    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:18:10.051633    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:18:10.051644    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:18:10.063768    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:18:10.063779    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:18:10.075079    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:18:10.075092    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:18:10.079702    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:18:10.079709    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:18:10.115625    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:18:10.115636    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:18:10.127634    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:18:10.127646    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:18:10.144591    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:18:10.144601    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:18:10.183341    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:18:10.183355    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:18:10.208607    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:18:10.208619    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:18:10.225446    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:18:10.225457    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:18:10.286131    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:18:12.745238    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:18:15.287087    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:18:15.287255    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:18:15.298826    4223 logs.go:276] 1 containers: [bfad85a2aa85]
	I0307 10:18:15.298900    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:18:15.314385    4223 logs.go:276] 1 containers: [f5dd3c2f1586]
	I0307 10:18:15.314462    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:18:15.326676    4223 logs.go:276] 2 containers: [8f40abedda95 b646ef99863b]
	I0307 10:18:15.326755    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:18:15.337596    4223 logs.go:276] 1 containers: [89b036ed2ce0]
	I0307 10:18:15.337661    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:18:15.349009    4223 logs.go:276] 1 containers: [91458cddd2a8]
	I0307 10:18:15.349082    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:18:15.359905    4223 logs.go:276] 1 containers: [a5eda657976b]
	I0307 10:18:15.359974    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:18:15.369928    4223 logs.go:276] 0 containers: []
	W0307 10:18:15.369941    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:18:15.369999    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:18:15.380172    4223 logs.go:276] 1 containers: [e098dcf633e8]
	I0307 10:18:15.380190    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:18:15.380196    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:18:15.385034    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:18:15.385044    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:18:15.420164    4223 logs.go:123] Gathering logs for etcd [f5dd3c2f1586] ...
	I0307 10:18:15.420178    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5dd3c2f1586"
	I0307 10:18:15.434711    4223 logs.go:123] Gathering logs for coredns [8f40abedda95] ...
	I0307 10:18:15.434722    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f40abedda95"
	I0307 10:18:15.446693    4223 logs.go:123] Gathering logs for coredns [b646ef99863b] ...
	I0307 10:18:15.446705    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b646ef99863b"
	I0307 10:18:15.458130    4223 logs.go:123] Gathering logs for storage-provisioner [e098dcf633e8] ...
	I0307 10:18:15.458138    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098dcf633e8"
	I0307 10:18:15.478732    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:18:15.478744    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:18:15.490254    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:18:15.490265    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 10:18:15.524451    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:18:15.524548    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:18:15.525478    4223 logs.go:123] Gathering logs for kube-apiserver [bfad85a2aa85] ...
	I0307 10:18:15.525485    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfad85a2aa85"
	I0307 10:18:15.544408    4223 logs.go:123] Gathering logs for kube-scheduler [89b036ed2ce0] ...
	I0307 10:18:15.544419    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89b036ed2ce0"
	I0307 10:18:15.559796    4223 logs.go:123] Gathering logs for kube-proxy [91458cddd2a8] ...
	I0307 10:18:15.559806    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91458cddd2a8"
	I0307 10:18:15.572243    4223 logs.go:123] Gathering logs for kube-controller-manager [a5eda657976b] ...
	I0307 10:18:15.572255    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5eda657976b"
	I0307 10:18:15.593903    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:18:15.593913    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:18:15.618595    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:18:15.618607    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 10:18:15.618630    4223 out.go:239] X Problems detected in kubelet:
	W0307 10:18:15.618634    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:18:15.618638    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:18:15.618642    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:18:15.618645    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
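The "Found kubelet problem" warnings (logs.go:138) and the "X Problems detected in kubelet" summary suggest a scan of the journalctl output for known error signatures, here the RBAC denial where system:node:running-upgrade-064000 cannot list the coredns ConfigMap. A sketch of that kind of scan, under the assumption that matching is a simple substring search; the patterns below are illustrative, not minikube's actual rules:

	package main

	import (
		"bufio"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Mirrors: sudo journalctl -u kubelet -n 400
		out, err := exec.Command("journalctl", "-u", "kubelet", "-n", "400").Output()
		if err != nil {
			fmt.Println("journalctl failed:", err)
			return
		}
		var problems []string
		sc := bufio.NewScanner(strings.NewReader(string(out)))
		for sc.Scan() {
			line := sc.Text()
			// Flag RBAC denials like the coredns ConfigMap errors above.
			if strings.Contains(line, "is forbidden") ||
				strings.Contains(line, "Failed to watch") {
				problems = append(problems, line)
			}
		}
		if len(problems) > 0 {
			fmt.Println("X Problems detected in kubelet:")
			for _, p := range problems {
				fmt.Println(" ", p)
			}
		}
	}

The same two 18:16:28 kubelet lines are re-reported on every pass because journalctl -n 400 keeps returning the same window of the journal.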
	I0307 10:18:17.747308    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:18:17.747547    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:18:17.773833    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:18:17.773990    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:18:17.791174    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:18:17.791272    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:18:17.804694    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:18:17.804763    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:18:17.816216    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:18:17.816280    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:18:17.830402    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:18:17.830469    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:18:17.840672    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:18:17.840733    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:18:17.850466    4364 logs.go:276] 0 containers: []
	W0307 10:18:17.850477    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:18:17.850527    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:18:17.861111    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:18:17.861128    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:18:17.861134    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:18:17.900213    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:18:17.900223    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:18:17.945087    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:18:17.945100    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:18:17.958994    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:18:17.959007    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:18:17.970580    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:18:17.970591    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:18:17.974642    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:18:17.974649    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:18:18.002954    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:18:18.002964    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:18:18.018386    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:18:18.018396    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:18:18.033252    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:18:18.033265    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:18:18.044844    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:18:18.044855    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:18:18.062087    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:18:18.062098    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:18:18.073623    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:18:18.073633    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:18:18.098051    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:18:18.098061    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:18:18.111443    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:18:18.111453    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:18:18.122805    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:18:18.122817    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:18:18.136770    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:18:18.136781    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:18:18.147797    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:18:18.147809    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:18:20.661933    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:18:25.664417    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:18:25.664535    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:18:25.683513    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:18:25.683610    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:18:25.699250    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:18:25.699327    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:18:25.710861    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:18:25.710932    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:18:25.722682    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:18:25.722764    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:18:25.733668    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:18:25.733738    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:18:25.744002    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:18:25.744071    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:18:25.754137    4364 logs.go:276] 0 containers: []
	W0307 10:18:25.754148    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:18:25.754208    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:18:25.764592    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:18:25.764610    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:18:25.764616    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:18:25.768733    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:18:25.768741    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:18:25.783083    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:18:25.783094    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:18:25.794365    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:18:25.794377    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:18:25.830824    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:18:25.830832    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:18:25.844360    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:18:25.844371    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:18:25.855949    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:18:25.855961    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:18:25.869085    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:18:25.869094    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:18:25.885899    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:18:25.885909    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:18:25.897461    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:18:25.897475    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:18:25.934103    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:18:25.934114    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:18:25.960042    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:18:25.960053    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:18:25.980520    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:18:25.980531    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:18:25.991807    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:18:25.991817    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:18:26.012763    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:18:26.012778    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:18:26.030715    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:18:26.030726    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:18:26.041547    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:18:26.041558    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:18:25.622500    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:18:28.565967    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:18:30.624749    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:18:30.625189    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:18:30.664486    4223 logs.go:276] 1 containers: [bfad85a2aa85]
	I0307 10:18:30.664625    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:18:30.686476    4223 logs.go:276] 1 containers: [f5dd3c2f1586]
	I0307 10:18:30.686577    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:18:30.702444    4223 logs.go:276] 4 containers: [30ac341fe864 78686b00c83b 8f40abedda95 b646ef99863b]
	I0307 10:18:30.702525    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:18:30.714800    4223 logs.go:276] 1 containers: [89b036ed2ce0]
	I0307 10:18:30.714874    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:18:30.725834    4223 logs.go:276] 1 containers: [91458cddd2a8]
	I0307 10:18:30.725911    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:18:30.736324    4223 logs.go:276] 1 containers: [a5eda657976b]
	I0307 10:18:30.736409    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:18:30.746907    4223 logs.go:276] 0 containers: []
	W0307 10:18:30.746923    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:18:30.746978    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:18:30.757653    4223 logs.go:276] 1 containers: [e098dcf633e8]
	I0307 10:18:30.757669    4223 logs.go:123] Gathering logs for etcd [f5dd3c2f1586] ...
	I0307 10:18:30.757676    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5dd3c2f1586"
	I0307 10:18:30.771346    4223 logs.go:123] Gathering logs for coredns [8f40abedda95] ...
	I0307 10:18:30.771357    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f40abedda95"
	I0307 10:18:30.782865    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:18:30.782877    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 10:18:30.816704    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:18:30.816797    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:18:30.817696    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:18:30.817702    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:18:30.853340    4223 logs.go:123] Gathering logs for storage-provisioner [e098dcf633e8] ...
	I0307 10:18:30.853352    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098dcf633e8"
	I0307 10:18:30.865756    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:18:30.865768    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:18:30.877399    4223 logs.go:123] Gathering logs for kube-apiserver [bfad85a2aa85] ...
	I0307 10:18:30.877414    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfad85a2aa85"
	I0307 10:18:30.891788    4223 logs.go:123] Gathering logs for coredns [b646ef99863b] ...
	I0307 10:18:30.891801    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b646ef99863b"
	I0307 10:18:30.903559    4223 logs.go:123] Gathering logs for kube-scheduler [89b036ed2ce0] ...
	I0307 10:18:30.903573    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89b036ed2ce0"
	I0307 10:18:30.918072    4223 logs.go:123] Gathering logs for kube-proxy [91458cddd2a8] ...
	I0307 10:18:30.918082    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91458cddd2a8"
	I0307 10:18:30.930014    4223 logs.go:123] Gathering logs for kube-controller-manager [a5eda657976b] ...
	I0307 10:18:30.930028    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5eda657976b"
	I0307 10:18:30.948661    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:18:30.948670    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:18:30.971843    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:18:30.971851    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:18:30.976332    4223 logs.go:123] Gathering logs for coredns [30ac341fe864] ...
	I0307 10:18:30.976340    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30ac341fe864"
	I0307 10:18:30.987727    4223 logs.go:123] Gathering logs for coredns [78686b00c83b] ...
	I0307 10:18:30.987742    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78686b00c83b"
	I0307 10:18:30.999188    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:18:30.999204    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 10:18:30.999230    4223 out.go:239] X Problems detected in kubelet:
	W0307 10:18:30.999235    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:18:30.999239    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:18:30.999243    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:18:30.999245    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
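The "container status" step in each cycle is runtime-agnostic: it runs sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, preferring crictl if it is installed and falling back to docker otherwise. A Go sketch of the same prefer-crictl-else-docker fallback, assuming the binaries are on PATH (sudo omitted for brevity; this is a reconstruction, not minikube's code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Mirrors: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
		if path, err := exec.LookPath("crictl"); err == nil {
			out, err := exec.Command(path, "ps", "-a").CombinedOutput()
			if err == nil {
				fmt.Print(string(out))
				return
			}
		}
		// Fallback when crictl is absent or fails, as in this docker-runtime VM.
		out, _ := exec.Command("docker", "ps", "-a").CombinedOutput()
		fmt.Print(string(out))
	}

Note that by this pass the coredns enumeration has grown to four containers ([30ac341fe864 78686b00c83b 8f40abedda95 b646ef99863b]), consistent with the pods being restarted while the apiserver stays unreachable.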
	I0307 10:18:33.568064    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:18:33.568303    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:18:33.595091    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:18:33.595217    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:18:33.613173    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:18:33.613258    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:18:33.626647    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:18:33.626726    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:18:33.641784    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:18:33.641852    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:18:33.652045    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:18:33.652113    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:18:33.662259    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:18:33.662335    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:18:33.672362    4364 logs.go:276] 0 containers: []
	W0307 10:18:33.672375    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:18:33.672430    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:18:33.683019    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:18:33.683034    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:18:33.683039    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:18:33.697295    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:18:33.697305    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:18:33.709449    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:18:33.709459    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:18:33.713978    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:18:33.713984    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:18:33.728078    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:18:33.728091    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:18:33.739086    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:18:33.739098    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:18:33.753868    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:18:33.753878    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:18:33.767586    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:18:33.767596    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:18:33.802116    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:18:33.802127    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:18:33.816122    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:18:33.816133    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:18:33.828015    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:18:33.828027    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:18:33.839268    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:18:33.839281    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:18:33.862990    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:18:33.862998    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:18:33.901534    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:18:33.901553    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:18:33.928697    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:18:33.928715    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:18:33.946455    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:18:33.946465    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:18:33.959750    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:18:33.959762    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:18:36.473266    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:18:41.475387    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:18:41.475533    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:18:41.486900    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:18:41.486977    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:18:41.502077    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:18:41.502147    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:18:41.512625    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:18:41.512689    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:18:41.524488    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:18:41.524559    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:18:41.534526    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:18:41.534592    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:18:41.545241    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:18:41.545299    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:18:41.555228    4364 logs.go:276] 0 containers: []
	W0307 10:18:41.555245    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:18:41.555305    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:18:41.565906    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
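	Each enumeration step above isolates one control-plane component by the k8s_<container>_<pod>_... naming convention that cri-dockerd carries over from dockershim, using a name filter plus a Go template that prints only the ID. A standalone equivalent (standard docker CLI, nothing assumed beyond names already in this log), which here would print the same two IDs reported at logs.go:276:
	  $ docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}}'
	  9315e04db43f
	  1390d083217d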
	I0307 10:18:41.565924    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:18:41.565929    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:18:41.605449    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:18:41.605463    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:18:41.619497    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:18:41.619508    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:18:41.630915    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:18:41.630928    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:18:41.645736    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:18:41.645747    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:18:41.668094    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:18:41.668101    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:18:41.679316    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:18:41.679327    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:18:41.683393    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:18:41.683400    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:18:41.694570    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:18:41.694580    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:18:41.706475    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:18:41.706488    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:18:41.717526    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:18:41.717537    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:18:41.003051    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:18:41.752674    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:18:41.752687    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:18:41.766366    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:18:41.766376    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:18:41.791881    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:18:41.791891    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:18:41.806946    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:18:41.806956    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:18:41.825050    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:18:41.825060    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:18:41.843452    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:18:41.843462    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:18:44.356944    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:18:46.005289    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:18:46.005490    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:18:46.027688    4223 logs.go:276] 1 containers: [bfad85a2aa85]
	I0307 10:18:46.027811    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:18:46.044365    4223 logs.go:276] 1 containers: [f5dd3c2f1586]
	I0307 10:18:46.044443    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:18:46.057048    4223 logs.go:276] 4 containers: [30ac341fe864 78686b00c83b 8f40abedda95 b646ef99863b]
	I0307 10:18:46.057123    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:18:46.079140    4223 logs.go:276] 1 containers: [89b036ed2ce0]
	I0307 10:18:46.079213    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:18:46.092176    4223 logs.go:276] 1 containers: [91458cddd2a8]
	I0307 10:18:46.092244    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:18:46.102843    4223 logs.go:276] 1 containers: [a5eda657976b]
	I0307 10:18:46.102913    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:18:46.113318    4223 logs.go:276] 0 containers: []
	W0307 10:18:46.113330    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:18:46.113387    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:18:46.123496    4223 logs.go:276] 1 containers: [e098dcf633e8]
	I0307 10:18:46.123513    4223 logs.go:123] Gathering logs for etcd [f5dd3c2f1586] ...
	I0307 10:18:46.123519    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5dd3c2f1586"
	I0307 10:18:46.137941    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:18:46.137955    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 10:18:46.171640    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:18:46.171732    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:18:46.172661    4223 logs.go:123] Gathering logs for coredns [b646ef99863b] ...
	I0307 10:18:46.172666    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b646ef99863b"
	I0307 10:18:46.184698    4223 logs.go:123] Gathering logs for kube-scheduler [89b036ed2ce0] ...
	I0307 10:18:46.184709    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89b036ed2ce0"
	I0307 10:18:46.199439    4223 logs.go:123] Gathering logs for kube-controller-manager [a5eda657976b] ...
	I0307 10:18:46.199451    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5eda657976b"
	I0307 10:18:46.223310    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:18:46.223321    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:18:46.234691    4223 logs.go:123] Gathering logs for kube-apiserver [bfad85a2aa85] ...
	I0307 10:18:46.234703    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfad85a2aa85"
	I0307 10:18:46.249064    4223 logs.go:123] Gathering logs for coredns [30ac341fe864] ...
	I0307 10:18:46.249074    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30ac341fe864"
	I0307 10:18:46.260559    4223 logs.go:123] Gathering logs for coredns [78686b00c83b] ...
	I0307 10:18:46.260574    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78686b00c83b"
	I0307 10:18:46.271766    4223 logs.go:123] Gathering logs for coredns [8f40abedda95] ...
	I0307 10:18:46.271779    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f40abedda95"
	I0307 10:18:46.283217    4223 logs.go:123] Gathering logs for storage-provisioner [e098dcf633e8] ...
	I0307 10:18:46.283228    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098dcf633e8"
	I0307 10:18:46.295200    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:18:46.295210    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:18:46.320441    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:18:46.320451    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:18:46.324700    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:18:46.324705    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:18:46.358718    4223 logs.go:123] Gathering logs for kube-proxy [91458cddd2a8] ...
	I0307 10:18:46.358732    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91458cddd2a8"
	I0307 10:18:46.370876    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:18:46.370886    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 10:18:46.370911    4223 out.go:239] X Problems detected in kubelet:
	W0307 10:18:46.370916    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:18:46.370921    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:18:46.370925    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:18:46.370928    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
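	The kubelet problem flagged above is a node-authorizer denial rather than a crash: the Node authorizer only lets system:node:running-upgrade-064000 read a ConfigMap once some pod scheduled to that node references it, and "no relationship found between node ... and this object" is that graph lookup failing. One way to interrogate the same authorization decision from outside, assuming a kubeconfig with impersonation rights (the --kubeconfig path is the in-guest one used elsewhere in this log):
	  $ kubectl --kubeconfig /var/lib/minikube/kubeconfig auth can-i \
	      list configmaps --namespace kube-system \
	      --as system:node:running-upgrade-064000 --as-group system:nodes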
	I0307 10:18:49.359153    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:18:49.359383    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:18:49.382343    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:18:49.382450    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:18:49.398160    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:18:49.398243    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:18:49.414445    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:18:49.414518    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:18:49.425348    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:18:49.425416    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:18:49.435622    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:18:49.435692    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:18:49.448368    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:18:49.448444    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:18:49.458289    4364 logs.go:276] 0 containers: []
	W0307 10:18:49.458303    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:18:49.458362    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:18:49.469017    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:18:49.469036    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:18:49.469042    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:18:49.483858    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:18:49.483868    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:18:49.499000    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:18:49.499011    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:18:49.510897    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:18:49.510909    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:18:49.526010    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:18:49.526021    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:18:49.550705    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:18:49.550716    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:18:49.564754    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:18:49.564764    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:18:49.580859    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:18:49.580869    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:18:49.584890    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:18:49.584899    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:18:49.620134    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:18:49.620144    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:18:49.633499    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:18:49.633509    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:18:49.645632    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:18:49.645646    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:18:49.668547    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:18:49.668554    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:18:49.706474    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:18:49.706482    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:18:49.724555    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:18:49.724565    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:18:49.736507    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:18:49.736519    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:18:49.748711    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:18:49.748722    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:18:52.267379    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:18:56.373869    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:18:57.269323    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:18:57.269668    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:18:57.305069    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:18:57.305192    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:18:57.324615    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:18:57.324700    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:18:57.339010    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:18:57.339087    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:18:57.350784    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:18:57.350854    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:18:57.361449    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:18:57.361525    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:18:57.371808    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:18:57.371880    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:18:57.387669    4364 logs.go:276] 0 containers: []
	W0307 10:18:57.387681    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:18:57.387740    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:18:57.398345    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:18:57.398362    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:18:57.398369    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:18:57.413376    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:18:57.413388    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:18:57.424326    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:18:57.424336    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:18:57.436660    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:18:57.436672    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:18:57.447985    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:18:57.447997    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:18:57.464779    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:18:57.464793    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:18:57.480408    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:18:57.480418    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:18:57.494219    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:18:57.494230    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:18:57.512504    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:18:57.512514    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:18:57.526557    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:18:57.526568    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:18:57.551966    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:18:57.551977    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:18:57.565610    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:18:57.565621    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:18:57.577182    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:18:57.577191    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:18:57.581827    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:18:57.581837    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:18:57.617849    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:18:57.617864    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:18:57.641593    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:18:57.641603    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:18:57.679837    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:18:57.679846    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:19:00.196273    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:19:01.376144    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:19:01.376498    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:19:01.410712    4223 logs.go:276] 1 containers: [bfad85a2aa85]
	I0307 10:19:01.410845    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:19:01.431213    4223 logs.go:276] 1 containers: [f5dd3c2f1586]
	I0307 10:19:01.431324    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:19:01.446088    4223 logs.go:276] 4 containers: [30ac341fe864 78686b00c83b 8f40abedda95 b646ef99863b]
	I0307 10:19:01.446171    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:19:01.462299    4223 logs.go:276] 1 containers: [89b036ed2ce0]
	I0307 10:19:01.462367    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:19:01.474530    4223 logs.go:276] 1 containers: [91458cddd2a8]
	I0307 10:19:01.474598    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:19:01.485803    4223 logs.go:276] 1 containers: [a5eda657976b]
	I0307 10:19:01.485874    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:19:01.496253    4223 logs.go:276] 0 containers: []
	W0307 10:19:01.496263    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:19:01.496315    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:19:01.506956    4223 logs.go:276] 1 containers: [e098dcf633e8]
	I0307 10:19:01.506976    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:19:01.506981    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 10:19:01.541687    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:19:01.541786    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:19:01.542709    4223 logs.go:123] Gathering logs for etcd [f5dd3c2f1586] ...
	I0307 10:19:01.542715    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5dd3c2f1586"
	I0307 10:19:01.561426    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:19:01.561437    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:19:01.572928    4223 logs.go:123] Gathering logs for coredns [30ac341fe864] ...
	I0307 10:19:01.572941    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30ac341fe864"
	I0307 10:19:01.584969    4223 logs.go:123] Gathering logs for kube-scheduler [89b036ed2ce0] ...
	I0307 10:19:01.584980    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89b036ed2ce0"
	I0307 10:19:01.605009    4223 logs.go:123] Gathering logs for storage-provisioner [e098dcf633e8] ...
	I0307 10:19:01.605022    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098dcf633e8"
	I0307 10:19:01.616548    4223 logs.go:123] Gathering logs for coredns [78686b00c83b] ...
	I0307 10:19:01.616561    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78686b00c83b"
	I0307 10:19:01.628466    4223 logs.go:123] Gathering logs for coredns [8f40abedda95] ...
	I0307 10:19:01.628476    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f40abedda95"
	I0307 10:19:01.640526    4223 logs.go:123] Gathering logs for coredns [b646ef99863b] ...
	I0307 10:19:01.640535    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b646ef99863b"
	I0307 10:19:01.651748    4223 logs.go:123] Gathering logs for kube-proxy [91458cddd2a8] ...
	I0307 10:19:01.651762    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91458cddd2a8"
	I0307 10:19:01.664170    4223 logs.go:123] Gathering logs for kube-controller-manager [a5eda657976b] ...
	I0307 10:19:01.664183    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5eda657976b"
	I0307 10:19:01.681337    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:19:01.681347    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:19:01.686126    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:19:01.686133    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:19:01.723608    4223 logs.go:123] Gathering logs for kube-apiserver [bfad85a2aa85] ...
	I0307 10:19:01.723624    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfad85a2aa85"
	I0307 10:19:01.738328    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:19:01.738339    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:19:01.764147    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:19:01.764155    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 10:19:01.764179    4223 out.go:239] X Problems detected in kubelet:
	W0307 10:19:01.764183    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:19:01.764199    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:19:01.764205    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:19:01.764209    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:19:05.198606    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:19:05.198960    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:19:05.228455    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:19:05.228589    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:19:05.247816    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:19:05.247909    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:19:05.262103    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:19:05.262184    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:19:05.273602    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:19:05.273676    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:19:05.288316    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:19:05.288390    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:19:05.299153    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:19:05.299231    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:19:05.309827    4364 logs.go:276] 0 containers: []
	W0307 10:19:05.309844    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:19:05.309904    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:19:05.325065    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:19:05.325084    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:19:05.325089    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:19:05.362685    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:19:05.362697    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:19:05.392313    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:19:05.392324    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:19:05.406081    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:19:05.406092    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:19:05.418180    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:19:05.418192    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:19:05.431813    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:19:05.431822    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:19:05.445919    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:19:05.445930    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:19:05.462330    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:19:05.462341    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:19:05.473526    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:19:05.473541    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:19:05.485492    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:19:05.485502    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:19:05.498978    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:19:05.498988    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:19:05.516578    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:19:05.516588    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:19:05.541447    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:19:05.541469    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:19:05.547953    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:19:05.547966    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:19:05.575487    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:19:05.575498    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:19:05.610078    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:19:05.610090    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:19:05.625044    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:19:05.625055    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:19:08.138097    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:19:11.766945    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:19:13.140308    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:19:13.140505    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:19:13.152409    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:19:13.152489    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:19:13.163877    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:19:13.163943    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:19:13.174908    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:19:13.174981    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:19:13.186063    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:19:13.186133    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:19:13.196630    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:19:13.196694    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:19:13.207266    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:19:13.207340    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:19:13.217255    4364 logs.go:276] 0 containers: []
	W0307 10:19:13.217268    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:19:13.217325    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:19:13.227612    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:19:13.227627    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:19:13.227633    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:19:13.232097    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:19:13.232102    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:19:13.246220    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:19:13.246230    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:19:13.261062    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:19:13.261073    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:19:13.282906    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:19:13.282921    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:19:13.308183    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:19:13.308195    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:19:13.319828    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:19:13.319838    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:19:13.358207    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:19:13.358218    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:19:13.376679    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:19:13.376690    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:19:13.390287    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:19:13.390298    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:19:13.404112    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:19:13.404122    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:19:13.444926    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:19:13.444938    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:19:13.459054    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:19:13.459066    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:19:13.474519    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:19:13.474530    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:19:13.486040    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:19:13.486054    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:19:13.497564    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:19:13.497575    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:19:13.514032    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:19:13.514044    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:19:16.027949    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:19:16.769191    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:19:16.769362    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:19:16.794046    4223 logs.go:276] 1 containers: [bfad85a2aa85]
	I0307 10:19:16.794159    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:19:16.810002    4223 logs.go:276] 1 containers: [f5dd3c2f1586]
	I0307 10:19:16.810084    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:19:16.822568    4223 logs.go:276] 4 containers: [30ac341fe864 78686b00c83b 8f40abedda95 b646ef99863b]
	I0307 10:19:16.822646    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:19:16.833471    4223 logs.go:276] 1 containers: [89b036ed2ce0]
	I0307 10:19:16.833535    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:19:16.844175    4223 logs.go:276] 1 containers: [91458cddd2a8]
	I0307 10:19:16.844238    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:19:16.854970    4223 logs.go:276] 1 containers: [a5eda657976b]
	I0307 10:19:16.855028    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:19:16.864935    4223 logs.go:276] 0 containers: []
	W0307 10:19:16.864947    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:19:16.865005    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:19:16.874981    4223 logs.go:276] 1 containers: [e098dcf633e8]
	I0307 10:19:16.875000    4223 logs.go:123] Gathering logs for coredns [8f40abedda95] ...
	I0307 10:19:16.875006    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f40abedda95"
	I0307 10:19:16.887119    4223 logs.go:123] Gathering logs for kube-proxy [91458cddd2a8] ...
	I0307 10:19:16.887131    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91458cddd2a8"
	I0307 10:19:16.900410    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:19:16.900422    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 10:19:16.937856    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:19:16.937955    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:19:16.938918    4223 logs.go:123] Gathering logs for coredns [78686b00c83b] ...
	I0307 10:19:16.938930    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78686b00c83b"
	I0307 10:19:16.950681    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:19:16.950691    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:19:16.975448    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:19:16.975462    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:19:16.987176    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:19:16.987187    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:19:17.022534    4223 logs.go:123] Gathering logs for coredns [30ac341fe864] ...
	I0307 10:19:17.022548    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30ac341fe864"
	I0307 10:19:17.039156    4223 logs.go:123] Gathering logs for storage-provisioner [e098dcf633e8] ...
	I0307 10:19:17.039167    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098dcf633e8"
	I0307 10:19:17.050737    4223 logs.go:123] Gathering logs for coredns [b646ef99863b] ...
	I0307 10:19:17.050748    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b646ef99863b"
	I0307 10:19:17.062579    4223 logs.go:123] Gathering logs for kube-scheduler [89b036ed2ce0] ...
	I0307 10:19:17.062589    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89b036ed2ce0"
	I0307 10:19:17.079358    4223 logs.go:123] Gathering logs for kube-controller-manager [a5eda657976b] ...
	I0307 10:19:17.079368    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5eda657976b"
	I0307 10:19:17.098185    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:19:17.098193    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:19:17.102635    4223 logs.go:123] Gathering logs for kube-apiserver [bfad85a2aa85] ...
	I0307 10:19:17.102643    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfad85a2aa85"
	I0307 10:19:17.117363    4223 logs.go:123] Gathering logs for etcd [f5dd3c2f1586] ...
	I0307 10:19:17.117375    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5dd3c2f1586"
	I0307 10:19:17.130935    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:19:17.130944    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 10:19:17.130968    4223 out.go:239] X Problems detected in kubelet:
	W0307 10:19:17.130972    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:19:17.130976    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:19:17.130979    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:19:17.130983    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:19:21.030184    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:19:21.030305    4364 kubeadm.go:591] duration metric: took 4m3.836076167s to restartPrimaryControlPlane
	W0307 10:19:21.030421    4364 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
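	The whole preceding stretch is one pattern on a timer: probe https://10.0.2.15:8443/healthz with a roughly five-second client timeout, and on every timeout re-enumerate containers and re-gather logs before retrying, until the budget for restartPrimaryControlPlane is spent. The probe itself is easy to reproduce from inside the guest; -k skips TLS verification, or the kubeadm-managed CA under the certificateDir shown later can be passed instead (the .crt/.key filenames below are the usual kubeadm ones and are an assumption here):
	  $ curl -k --max-time 5 https://10.0.2.15:8443/healthz
	  $ curl --cacert /var/lib/minikube/certs/ca.crt \
	         --cert /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	         --key /var/lib/minikube/certs/apiserver-kubelet-client.key \
	         --max-time 5 https://10.0.2.15:8443/healthz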
	I0307 10:19:21.030468    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0307 10:19:22.123367    4364 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.092919292s)
	I0307 10:19:22.123431    4364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 10:19:22.128194    4364 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 10:19:22.131024    4364 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 10:19:22.133536    4364 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 10:19:22.133543    4364 kubeadm.go:156] found existing configuration files:
	
	I0307 10:19:22.133567    4364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/admin.conf
	I0307 10:19:22.136305    4364 kubeadm.go:162] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0307 10:19:22.136344    4364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0307 10:19:22.139431    4364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/kubelet.conf
	I0307 10:19:22.142157    4364 kubeadm.go:162] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0307 10:19:22.142182    4364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0307 10:19:22.144788    4364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/controller-manager.conf
	I0307 10:19:22.147831    4364 kubeadm.go:162] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0307 10:19:22.147855    4364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0307 10:19:22.150653    4364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/scheduler.conf
	I0307 10:19:22.153225    4364 kubeadm.go:162] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0307 10:19:22.153247    4364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
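	The four grep-then-rm pairs above are minikube's stale kubeconfig sweep: each /etc/kubernetes/*.conf is kept only if it already points at https://control-plane.minikube.internal:50517, and since kubeadm reset just deleted them all, every grep exits non-zero and every file is removed again. Consolidated into one loop, the equivalent shell logic is roughly (a sketch, not minikube's actual source):
	  $ for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q 'https://control-plane.minikube.internal:50517' \
	        "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
	    done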
	I0307 10:19:22.156491    4364 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0307 10:19:22.174132    4364 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0307 10:19:22.174165    4364 kubeadm.go:309] [preflight] Running pre-flight checks
	I0307 10:19:22.222800    4364 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0307 10:19:22.222860    4364 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0307 10:19:22.222909    4364 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0307 10:19:22.271411    4364 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0307 10:19:22.275990    4364 out.go:204]   - Generating certificates and keys ...
	I0307 10:19:22.276067    4364 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0307 10:19:22.276108    4364 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0307 10:19:22.276157    4364 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0307 10:19:22.276195    4364 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0307 10:19:22.276233    4364 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0307 10:19:22.276261    4364 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0307 10:19:22.276293    4364 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0307 10:19:22.276331    4364 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0307 10:19:22.276408    4364 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0307 10:19:22.276455    4364 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0307 10:19:22.276479    4364 kubeadm.go:309] [certs] Using the existing "sa" key
	I0307 10:19:22.276523    4364 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0307 10:19:22.316617    4364 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0307 10:19:22.390999    4364 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0307 10:19:22.437089    4364 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0307 10:19:22.617118    4364 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0307 10:19:22.646757    4364 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 10:19:22.647121    4364 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 10:19:22.647151    4364 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0307 10:19:22.716160    4364 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0307 10:19:22.723639    4364 out.go:204]   - Booting up control plane ...
	I0307 10:19:22.723689    4364 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0307 10:19:22.723732    4364 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0307 10:19:22.723767    4364 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0307 10:19:22.723814    4364 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0307 10:19:22.723896    4364 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
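	While kubeadm waits here, nothing goes through the API: the kubelet materializes the four components directly from the static Pod manifests written above, so progress can be checked on the guest just by listing that directory (the file names match the FileAvailable preflight checks in the kubeadm init invocation earlier):
	  $ ls /etc/kubernetes/manifests
	  etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml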
	I0307 10:19:27.134778    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:19:27.224543    4364 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.501320 seconds
	I0307 10:19:27.224621    4364 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0307 10:19:27.228461    4364 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0307 10:19:27.739336    4364 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0307 10:19:27.739711    4364 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-853000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0307 10:19:28.243917    4364 kubeadm.go:309] [bootstrap-token] Using token: rpjmeh.3x67i5b5l73s4022
	I0307 10:19:28.247695    4364 out.go:204]   - Configuring RBAC rules ...
	I0307 10:19:28.247773    4364 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0307 10:19:28.249766    4364 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0307 10:19:28.255361    4364 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0307 10:19:28.256217    4364 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0307 10:19:28.256946    4364 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0307 10:19:28.257789    4364 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0307 10:19:28.260676    4364 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0307 10:19:28.404899    4364 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0307 10:19:28.653186    4364 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0307 10:19:28.653747    4364 kubeadm.go:309] 
	I0307 10:19:28.653790    4364 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0307 10:19:28.653803    4364 kubeadm.go:309] 
	I0307 10:19:28.653846    4364 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0307 10:19:28.653851    4364 kubeadm.go:309] 
	I0307 10:19:28.653864    4364 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0307 10:19:28.653901    4364 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0307 10:19:28.653930    4364 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0307 10:19:28.653934    4364 kubeadm.go:309] 
	I0307 10:19:28.653968    4364 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0307 10:19:28.653974    4364 kubeadm.go:309] 
	I0307 10:19:28.654005    4364 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0307 10:19:28.654008    4364 kubeadm.go:309] 
	I0307 10:19:28.654033    4364 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0307 10:19:28.654079    4364 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0307 10:19:28.654124    4364 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0307 10:19:28.654128    4364 kubeadm.go:309] 
	I0307 10:19:28.654184    4364 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0307 10:19:28.654239    4364 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0307 10:19:28.654242    4364 kubeadm.go:309] 
	I0307 10:19:28.654298    4364 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token rpjmeh.3x67i5b5l73s4022 \
	I0307 10:19:28.654361    4364 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4f3f709732e35797580c9b8d11f82ef6c52734bdc7940106dd5836141654d720 \
	I0307 10:19:28.654375    4364 kubeadm.go:309] 	--control-plane 
	I0307 10:19:28.654380    4364 kubeadm.go:309] 
	I0307 10:19:28.654421    4364 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0307 10:19:28.654424    4364 kubeadm.go:309] 
	I0307 10:19:28.654479    4364 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token rpjmeh.3x67i5b5l73s4022 \
	I0307 10:19:28.654537    4364 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4f3f709732e35797580c9b8d11f82ef6c52734bdc7940106dd5836141654d720 
	I0307 10:19:28.654649    4364 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
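
For reference, the sha256 value passed as --discovery-token-ca-cert-hash in the join commands above can be recomputed on the control plane from the cluster CA. This is the stock openssl pipeline from the kubeadm documentation (default PKI path assumed), followed by the fix for the Service-Kubelet warning:

    # Recompute the discovery hash; output should match the sha256:... above.
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # Clear the [WARNING Service-Kubelet] so the kubelet starts on reboot:
    sudo systemctl enable kubelet.service
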
	I0307 10:19:28.654658    4364 cni.go:84] Creating CNI manager for ""
	I0307 10:19:28.654666    4364 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 10:19:28.659306    4364 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0307 10:19:28.666327    4364 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0307 10:19:28.669430    4364 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
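
The 457-byte 1-k8s.conflist copied above is minikube's bridge CNI configuration. A minimal sketch of what such a conflist typically contains (field values are illustrative, not the exact bytes shipped here):

    # Illustrative bridge CNI config of the kind minikube writes:
    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
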
	I0307 10:19:28.674419    4364 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0307 10:19:28.674460    4364 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 10:19:28.674486    4364 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-853000 minikube.k8s.io/updated_at=2024_03_07T10_19_28_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c2be9b5c96c7962d271300acacf405d2402b272f minikube.k8s.io/name=stopped-upgrade-853000 minikube.k8s.io/primary=true
	I0307 10:19:28.714941    4364 kubeadm.go:1106] duration metric: took 40.516833ms to wait for elevateKubeSystemPrivileges
	I0307 10:19:28.714946    4364 ops.go:34] apiserver oom_adj: -16
	W0307 10:19:28.714964    4364 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0307 10:19:28.714967    4364 kubeadm.go:393] duration metric: took 4m11.53410675s to StartCluster
	I0307 10:19:28.714976    4364 settings.go:142] acquiring lock: {Name:mke72688bb63f8128eac153bbf90929d78ec9d29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:19:28.715052    4364 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:19:28.715446    4364 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/kubeconfig: {Name:mkeef9e7922e618c2ac8219607b646aeaf5f61cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:19:28.715633    4364 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:19:28.719310    4364 out.go:177] * Verifying Kubernetes components...
	I0307 10:19:28.715701    4364 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0307 10:19:28.715815    4364 config.go:182] Loaded profile config "stopped-upgrade-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 10:19:28.727252    4364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:19:28.727254    4364 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-853000"
	I0307 10:19:28.727257    4364 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-853000"
	I0307 10:19:28.727270    4364 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-853000"
	W0307 10:19:28.727273    4364 addons.go:243] addon storage-provisioner should already be in state true
	I0307 10:19:28.727273    4364 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-853000"
	I0307 10:19:28.727285    4364 host.go:66] Checking if "stopped-upgrade-853000" exists ...
	I0307 10:19:28.728557    4364 kapi.go:59] client config for stopped-upgrade-853000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/client.key", CAFile:"/Users/jenkins/minikube-integration/18241-1349/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1016e36a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 10:19:28.728680    4364 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-853000"
	W0307 10:19:28.728685    4364 addons.go:243] addon default-storageclass should already be in state true
	I0307 10:19:28.728697    4364 host.go:66] Checking if "stopped-upgrade-853000" exists ...
	I0307 10:19:28.733269    4364 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 10:19:28.737380    4364 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 10:19:28.737397    4364 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0307 10:19:28.737411    4364 sshutil.go:53] new ssh client: &{IP:localhost Port:50483 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/stopped-upgrade-853000/id_rsa Username:docker}
	I0307 10:19:28.738358    4364 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0307 10:19:28.738363    4364 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0307 10:19:28.738368    4364 sshutil.go:53] new ssh client: &{IP:localhost Port:50483 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/stopped-upgrade-853000/id_rsa Username:docker}
	I0307 10:19:28.802296    4364 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 10:19:28.808079    4364 api_server.go:52] waiting for apiserver process to appear ...
	I0307 10:19:28.808150    4364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 10:19:28.812500    4364 api_server.go:72] duration metric: took 96.859417ms to wait for apiserver process to appear ...
	I0307 10:19:28.812508    4364 api_server.go:88] waiting for apiserver healthz status ...
	I0307 10:19:28.812515    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:19:28.818953    4364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0307 10:19:28.863481    4364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 10:19:32.136914    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:19:32.137158    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:19:32.158817    4223 logs.go:276] 1 containers: [bfad85a2aa85]
	I0307 10:19:32.158914    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:19:32.174503    4223 logs.go:276] 1 containers: [f5dd3c2f1586]
	I0307 10:19:32.174579    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:19:32.185643    4223 logs.go:276] 4 containers: [30ac341fe864 78686b00c83b 8f40abedda95 b646ef99863b]
	I0307 10:19:32.185718    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:19:32.219834    4223 logs.go:276] 1 containers: [89b036ed2ce0]
	I0307 10:19:32.219910    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:19:32.233826    4223 logs.go:276] 1 containers: [91458cddd2a8]
	I0307 10:19:32.233898    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:19:32.244538    4223 logs.go:276] 1 containers: [a5eda657976b]
	I0307 10:19:32.244606    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:19:32.258843    4223 logs.go:276] 0 containers: []
	W0307 10:19:32.258858    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:19:32.258917    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:19:32.269264    4223 logs.go:276] 1 containers: [e098dcf633e8]
	I0307 10:19:32.269282    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:19:32.269288    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:19:32.273886    4223 logs.go:123] Gathering logs for kube-apiserver [bfad85a2aa85] ...
	I0307 10:19:32.273893    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfad85a2aa85"
	I0307 10:19:32.287903    4223 logs.go:123] Gathering logs for kube-scheduler [89b036ed2ce0] ...
	I0307 10:19:32.287913    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89b036ed2ce0"
	I0307 10:19:32.302923    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:19:32.302934    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:19:32.326147    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:19:32.326156    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 10:19:32.359345    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:19:32.359438    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:19:32.360338    4223 logs.go:123] Gathering logs for etcd [f5dd3c2f1586] ...
	I0307 10:19:32.360343    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5dd3c2f1586"
	I0307 10:19:32.374097    4223 logs.go:123] Gathering logs for coredns [78686b00c83b] ...
	I0307 10:19:32.374108    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78686b00c83b"
	I0307 10:19:32.386120    4223 logs.go:123] Gathering logs for coredns [8f40abedda95] ...
	I0307 10:19:32.386135    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f40abedda95"
	I0307 10:19:32.398300    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:19:32.398310    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:19:32.434768    4223 logs.go:123] Gathering logs for coredns [30ac341fe864] ...
	I0307 10:19:32.434779    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30ac341fe864"
	I0307 10:19:32.446747    4223 logs.go:123] Gathering logs for kube-controller-manager [a5eda657976b] ...
	I0307 10:19:32.446758    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5eda657976b"
	I0307 10:19:32.469263    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:19:32.469275    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:19:32.481066    4223 logs.go:123] Gathering logs for coredns [b646ef99863b] ...
	I0307 10:19:32.481079    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b646ef99863b"
	I0307 10:19:32.492968    4223 logs.go:123] Gathering logs for kube-proxy [91458cddd2a8] ...
	I0307 10:19:32.492981    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91458cddd2a8"
	I0307 10:19:32.505218    4223 logs.go:123] Gathering logs for storage-provisioner [e098dcf633e8] ...
	I0307 10:19:32.505229    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098dcf633e8"
	I0307 10:19:32.516548    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:19:32.516558    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 10:19:32.516584    4223 out.go:239] X Problems detected in kubelet:
	W0307 10:19:32.516592    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:19:32.516595    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:19:32.516599    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:19:32.516602    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
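
The kubelet problem flagged twice above is a node-authorizer denial: a kubelet may only read a ConfigMap once a pod referencing it has been bound to its node, and at 18:16:28 no such pod existed yet for running-upgrade-064000. One way to reproduce the check (kubectl auth can-i with impersonation; the identity comes from the log line itself):

    # Re-run the authorization check the kubelet reflector failed:
    kubectl auth can-i list configmaps \
      --namespace kube-system \
      --as system:node:running-upgrade-064000 \
      --as-group system:nodes
    # Prints "no" while the node authorizer sees no pod on this node
    # that references the "coredns" ConfigMap.
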
	I0307 10:19:33.814495    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:19:33.814526    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:19:38.814655    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:19:38.814696    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:19:42.520406    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:19:43.814879    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:19:43.814914    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:19:47.522471    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:19:47.522621    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:19:47.537647    4223 logs.go:276] 1 containers: [bfad85a2aa85]
	I0307 10:19:47.537733    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:19:47.549809    4223 logs.go:276] 1 containers: [f5dd3c2f1586]
	I0307 10:19:47.549878    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:19:47.560133    4223 logs.go:276] 4 containers: [30ac341fe864 78686b00c83b 8f40abedda95 b646ef99863b]
	I0307 10:19:47.560212    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:19:47.570497    4223 logs.go:276] 1 containers: [89b036ed2ce0]
	I0307 10:19:47.570568    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:19:47.581358    4223 logs.go:276] 1 containers: [91458cddd2a8]
	I0307 10:19:47.581423    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:19:47.591867    4223 logs.go:276] 1 containers: [a5eda657976b]
	I0307 10:19:47.591942    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:19:47.611285    4223 logs.go:276] 0 containers: []
	W0307 10:19:47.611296    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:19:47.611354    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:19:47.622392    4223 logs.go:276] 1 containers: [e098dcf633e8]
	I0307 10:19:47.622409    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:19:47.622415    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:19:48.815259    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:19:48.815295    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:19:47.657219    4223 logs.go:123] Gathering logs for coredns [30ac341fe864] ...
	I0307 10:19:47.657230    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30ac341fe864"
	I0307 10:19:47.669600    4223 logs.go:123] Gathering logs for coredns [78686b00c83b] ...
	I0307 10:19:47.669611    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78686b00c83b"
	I0307 10:19:47.689172    4223 logs.go:123] Gathering logs for coredns [b646ef99863b] ...
	I0307 10:19:47.689183    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b646ef99863b"
	I0307 10:19:47.701367    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:19:47.701376    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:19:47.725309    4223 logs.go:123] Gathering logs for coredns [8f40abedda95] ...
	I0307 10:19:47.725317    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f40abedda95"
	I0307 10:19:47.741215    4223 logs.go:123] Gathering logs for kube-proxy [91458cddd2a8] ...
	I0307 10:19:47.741229    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91458cddd2a8"
	I0307 10:19:47.752675    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:19:47.752685    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:19:47.757064    4223 logs.go:123] Gathering logs for kube-apiserver [bfad85a2aa85] ...
	I0307 10:19:47.757075    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfad85a2aa85"
	I0307 10:19:47.771442    4223 logs.go:123] Gathering logs for etcd [f5dd3c2f1586] ...
	I0307 10:19:47.771455    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5dd3c2f1586"
	I0307 10:19:47.785575    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:19:47.785587    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:19:47.797290    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:19:47.797301    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 10:19:47.831262    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:19:47.831357    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:19:47.832259    4223 logs.go:123] Gathering logs for kube-scheduler [89b036ed2ce0] ...
	I0307 10:19:47.832266    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89b036ed2ce0"
	I0307 10:19:47.846898    4223 logs.go:123] Gathering logs for kube-controller-manager [a5eda657976b] ...
	I0307 10:19:47.846909    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5eda657976b"
	I0307 10:19:47.864560    4223 logs.go:123] Gathering logs for storage-provisioner [e098dcf633e8] ...
	I0307 10:19:47.864569    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098dcf633e8"
	I0307 10:19:47.876378    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:19:47.876388    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 10:19:47.876414    4223 out.go:239] X Problems detected in kubelet:
	W0307 10:19:47.876418    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:19:47.876422    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:19:47.876426    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:19:47.876429    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:19:53.815743    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:19:53.815773    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:19:58.816711    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:19:58.816763    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0307 10:19:59.166567    4364 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0307 10:19:59.171715    4364 out.go:177] * Enabled addons: storage-provisioner
	I0307 10:19:59.182688    4364 addons.go:505] duration metric: took 30.468028875s for enable addons: enabled=[storage-provisioner]
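
Had the apiserver answered, enabling default-storageclass amounts to annotating one StorageClass as the cluster default. A hedged manual equivalent (class name "standard" assumed, minikube's usual default):

    # Inspect existing classes and their default annotation:
    kubectl get storageclass
    # Mark minikube's "standard" class as the cluster default:
    kubectl patch storageclass standard -p \
      '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
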
	I0307 10:19:57.880292    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:20:03.817736    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:20:03.817780    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:20:02.882425    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:20:02.882678    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:20:02.906726    4223 logs.go:276] 1 containers: [bfad85a2aa85]
	I0307 10:20:02.906843    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:20:02.923666    4223 logs.go:276] 1 containers: [f5dd3c2f1586]
	I0307 10:20:02.923737    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:20:02.936749    4223 logs.go:276] 4 containers: [30ac341fe864 78686b00c83b 8f40abedda95 b646ef99863b]
	I0307 10:20:02.936832    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:20:02.953028    4223 logs.go:276] 1 containers: [89b036ed2ce0]
	I0307 10:20:02.953098    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:20:02.963778    4223 logs.go:276] 1 containers: [91458cddd2a8]
	I0307 10:20:02.963849    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:20:02.974543    4223 logs.go:276] 1 containers: [a5eda657976b]
	I0307 10:20:02.974620    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:20:02.984389    4223 logs.go:276] 0 containers: []
	W0307 10:20:02.984405    4223 logs.go:278] No container was found matching "kindnet"
	I0307 10:20:02.984459    4223 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:20:02.995567    4223 logs.go:276] 1 containers: [e098dcf633e8]
	I0307 10:20:02.995585    4223 logs.go:123] Gathering logs for container status ...
	I0307 10:20:02.995590    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:20:03.007181    4223 logs.go:123] Gathering logs for kubelet ...
	I0307 10:20:03.007194    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 10:20:03.040896    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:20:03.040989    4223 logs.go:138] Found kubelet problem: Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:20:03.041917    4223 logs.go:123] Gathering logs for dmesg ...
	I0307 10:20:03.041925    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:20:03.046171    4223 logs.go:123] Gathering logs for etcd [f5dd3c2f1586] ...
	I0307 10:20:03.046177    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5dd3c2f1586"
	I0307 10:20:03.060423    4223 logs.go:123] Gathering logs for coredns [b646ef99863b] ...
	I0307 10:20:03.060434    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b646ef99863b"
	I0307 10:20:03.072838    4223 logs.go:123] Gathering logs for coredns [8f40abedda95] ...
	I0307 10:20:03.072852    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f40abedda95"
	I0307 10:20:03.084584    4223 logs.go:123] Gathering logs for kube-scheduler [89b036ed2ce0] ...
	I0307 10:20:03.084595    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89b036ed2ce0"
	I0307 10:20:03.103523    4223 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:20:03.103533    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:20:03.138298    4223 logs.go:123] Gathering logs for kube-apiserver [bfad85a2aa85] ...
	I0307 10:20:03.138311    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfad85a2aa85"
	I0307 10:20:03.155889    4223 logs.go:123] Gathering logs for kube-proxy [91458cddd2a8] ...
	I0307 10:20:03.155903    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91458cddd2a8"
	I0307 10:20:03.181402    4223 logs.go:123] Gathering logs for kube-controller-manager [a5eda657976b] ...
	I0307 10:20:03.181415    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5eda657976b"
	I0307 10:20:03.199621    4223 logs.go:123] Gathering logs for storage-provisioner [e098dcf633e8] ...
	I0307 10:20:03.199632    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e098dcf633e8"
	I0307 10:20:03.210365    4223 logs.go:123] Gathering logs for Docker ...
	I0307 10:20:03.210375    4223 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:20:03.239986    4223 logs.go:123] Gathering logs for coredns [30ac341fe864] ...
	I0307 10:20:03.239997    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30ac341fe864"
	I0307 10:20:03.251003    4223 logs.go:123] Gathering logs for coredns [78686b00c83b] ...
	I0307 10:20:03.251013    4223 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78686b00c83b"
	I0307 10:20:03.262188    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:20:03.262200    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 10:20:03.262225    4223 out.go:239] X Problems detected in kubelet:
	W0307 10:20:03.262231    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	W0307 10:20:03.262234    4223 out.go:239]   Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	I0307 10:20:03.262240    4223 out.go:304] Setting ErrFile to fd 2...
	I0307 10:20:03.262243    4223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:20:08.819244    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:20:08.819303    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:20:13.821093    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:20:13.821123    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:20:13.266097    4223 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:20:18.268228    4223 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:20:18.272573    4223 out.go:177] 
	W0307 10:20:18.276440    4223 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0307 10:20:18.276447    4223 out.go:239] * 
	W0307 10:20:18.276914    4223 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:20:18.286488    4223 out.go:177] 
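
The probe that never succeeds above can be reproduced by hand from inside the guest, assuming the same endpoint:

    # -k: the apiserver serving cert is not in the local trust store;
    # --max-time mirrors the short client timeout seen in the log.
    curl -k --max-time 5 https://10.0.2.15:8443/healthz
    # A healthy apiserver replies "ok"; here the request times out,
    # matching the repeated "context deadline exceeded" lines.
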
	I0307 10:20:18.823143    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:20:18.823169    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:20:23.823776    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:20:23.823806    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:20:28.825843    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:20:28.826037    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:20:28.858511    4364 logs.go:276] 1 containers: [427a7ba97366]
	I0307 10:20:28.858596    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:20:28.883168    4364 logs.go:276] 1 containers: [636a1d7c9755]
	I0307 10:20:28.883245    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:20:28.895128    4364 logs.go:276] 2 containers: [8c62957ec4e6 418ffa66d7e1]
	I0307 10:20:28.895208    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:20:28.910103    4364 logs.go:276] 1 containers: [f8136f0d1e9e]
	I0307 10:20:28.910174    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:20:28.920723    4364 logs.go:276] 1 containers: [e401b386b35d]
	I0307 10:20:28.920794    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:20:28.931116    4364 logs.go:276] 1 containers: [d1da6e8a1ecf]
	I0307 10:20:28.931181    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:20:28.940979    4364 logs.go:276] 0 containers: []
	W0307 10:20:28.940993    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:20:28.941060    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:20:28.951106    4364 logs.go:276] 1 containers: [826953fd4638]
	I0307 10:20:28.951120    4364 logs.go:123] Gathering logs for coredns [418ffa66d7e1] ...
	I0307 10:20:28.951125    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ffa66d7e1"
	I0307 10:20:28.973050    4364 logs.go:123] Gathering logs for storage-provisioner [826953fd4638] ...
	I0307 10:20:28.973065    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 826953fd4638"
	I0307 10:20:28.984986    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:20:28.984998    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:20:29.008507    4364 logs.go:123] Gathering logs for coredns [8c62957ec4e6] ...
	I0307 10:20:29.008519    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c62957ec4e6"
	I0307 10:20:29.021051    4364 logs.go:123] Gathering logs for kube-scheduler [f8136f0d1e9e] ...
	I0307 10:20:29.021064    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8136f0d1e9e"
	I0307 10:20:29.039197    4364 logs.go:123] Gathering logs for kube-proxy [e401b386b35d] ...
	I0307 10:20:29.039208    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e401b386b35d"
	I0307 10:20:29.050966    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:20:29.050977    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:20:29.085213    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:20:29.085229    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:20:29.090264    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:20:29.090271    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:20:29.126884    4364 logs.go:123] Gathering logs for kube-apiserver [427a7ba97366] ...
	I0307 10:20:29.126896    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 427a7ba97366"
	I0307 10:20:29.141681    4364 logs.go:123] Gathering logs for etcd [636a1d7c9755] ...
	I0307 10:20:29.141695    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636a1d7c9755"
	I0307 10:20:29.155657    4364 logs.go:123] Gathering logs for kube-controller-manager [d1da6e8a1ecf] ...
	I0307 10:20:29.155668    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1da6e8a1ecf"
	I0307 10:20:29.173469    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:20:29.173482    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:20:31.687440    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Thu 2024-03-07 18:11:20 UTC, ends at Thu 2024-03-07 18:20:34 UTC. --
	Mar 07 18:20:16 running-upgrade-064000 dockerd[2894]: time="2024-03-07T18:20:16.405347309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 07 18:20:16 running-upgrade-064000 dockerd[2894]: time="2024-03-07T18:20:16.405353224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 18:20:16 running-upgrade-064000 dockerd[2894]: time="2024-03-07T18:20:16.405404133Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/02adf024a96e54ac6dd70a145f6180067a9d106b34dc84889d1adaef5d1fe097 pid=16891 runtime=io.containerd.runc.v2
	Mar 07 18:20:16 running-upgrade-064000 cri-dockerd[2737]: time="2024-03-07T18:20:16Z" level=error msg="ContainerStats resp: {0x4000984a00 linux}"
	Mar 07 18:20:16 running-upgrade-064000 cri-dockerd[2737]: time="2024-03-07T18:20:16Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 07 18:20:17 running-upgrade-064000 cri-dockerd[2737]: time="2024-03-07T18:20:17Z" level=error msg="ContainerStats resp: {0x40008ea300 linux}"
	Mar 07 18:20:17 running-upgrade-064000 cri-dockerd[2737]: time="2024-03-07T18:20:17Z" level=error msg="ContainerStats resp: {0x40008ea6c0 linux}"
	Mar 07 18:20:17 running-upgrade-064000 cri-dockerd[2737]: time="2024-03-07T18:20:17Z" level=error msg="ContainerStats resp: {0x40008ea800 linux}"
	Mar 07 18:20:17 running-upgrade-064000 cri-dockerd[2737]: time="2024-03-07T18:20:17Z" level=error msg="ContainerStats resp: {0x40008eaf80 linux}"
	Mar 07 18:20:17 running-upgrade-064000 cri-dockerd[2737]: time="2024-03-07T18:20:17Z" level=error msg="ContainerStats resp: {0x4000a62b00 linux}"
	Mar 07 18:20:17 running-upgrade-064000 cri-dockerd[2737]: time="2024-03-07T18:20:17Z" level=error msg="ContainerStats resp: {0x4000a621c0 linux}"
	Mar 07 18:20:17 running-upgrade-064000 cri-dockerd[2737]: time="2024-03-07T18:20:17Z" level=error msg="ContainerStats resp: {0x4000a625c0 linux}"
	Mar 07 18:20:21 running-upgrade-064000 cri-dockerd[2737]: time="2024-03-07T18:20:21Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 07 18:20:26 running-upgrade-064000 cri-dockerd[2737]: time="2024-03-07T18:20:26Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 07 18:20:27 running-upgrade-064000 cri-dockerd[2737]: time="2024-03-07T18:20:27Z" level=error msg="ContainerStats resp: {0x4000a1eb80 linux}"
	Mar 07 18:20:27 running-upgrade-064000 cri-dockerd[2737]: time="2024-03-07T18:20:27Z" level=error msg="ContainerStats resp: {0x40008f3300 linux}"
	Mar 07 18:20:28 running-upgrade-064000 cri-dockerd[2737]: time="2024-03-07T18:20:28Z" level=error msg="ContainerStats resp: {0x40009c8040 linux}"
	Mar 07 18:20:29 running-upgrade-064000 cri-dockerd[2737]: time="2024-03-07T18:20:29Z" level=error msg="ContainerStats resp: {0x400041a640 linux}"
	Mar 07 18:20:29 running-upgrade-064000 cri-dockerd[2737]: time="2024-03-07T18:20:29Z" level=error msg="ContainerStats resp: {0x400041ab40 linux}"
	Mar 07 18:20:29 running-upgrade-064000 cri-dockerd[2737]: time="2024-03-07T18:20:29Z" level=error msg="ContainerStats resp: {0x40009c9600 linux}"
	Mar 07 18:20:29 running-upgrade-064000 cri-dockerd[2737]: time="2024-03-07T18:20:29Z" level=error msg="ContainerStats resp: {0x40009c9900 linux}"
	Mar 07 18:20:29 running-upgrade-064000 cri-dockerd[2737]: time="2024-03-07T18:20:29Z" level=error msg="ContainerStats resp: {0x40009c9c40 linux}"
	Mar 07 18:20:29 running-upgrade-064000 cri-dockerd[2737]: time="2024-03-07T18:20:29Z" level=error msg="ContainerStats resp: {0x40009c9e00 linux}"
	Mar 07 18:20:29 running-upgrade-064000 cri-dockerd[2737]: time="2024-03-07T18:20:29Z" level=error msg="ContainerStats resp: {0x40008c4f00 linux}"
	Mar 07 18:20:31 running-upgrade-064000 cri-dockerd[2737]: time="2024-03-07T18:20:31Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	02adf024a96e5       edaa71f2aee88       18 seconds ago      Running             coredns                   2                   11bdd91fe801f
	44d19b3f74837       edaa71f2aee88       18 seconds ago      Running             coredns                   2                   b74a5ad2b156f
	30ac341fe8649       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   b74a5ad2b156f
	78686b00c83b5       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   11bdd91fe801f
	91458cddd2a8e       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   1eae33f6e9e55
	e098dcf633e88       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   a9aca63463283
	bfad85a2aa85c       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   c5b5c19acc141
	89b036ed2ce06       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   64361e2993a22
	a5eda657976b1       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   7f10e5ff792f6
	f5dd3c2f15863       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   8ef37d0714105
	
	
	==> coredns [02adf024a96e] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3568028320698403262.4169761230918128417. HINFO: read udp 10.244.0.3:46276->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3568028320698403262.4169761230918128417. HINFO: read udp 10.244.0.3:33338->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3568028320698403262.4169761230918128417. HINFO: read udp 10.244.0.3:55744->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3568028320698403262.4169761230918128417. HINFO: read udp 10.244.0.3:46642->10.0.2.3:53: i/o timeout
	
	
	==> coredns [30ac341fe864] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7099456311627794934.8489366121081164461. HINFO: read udp 10.244.0.2:52180->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7099456311627794934.8489366121081164461. HINFO: read udp 10.244.0.2:32962->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7099456311627794934.8489366121081164461. HINFO: read udp 10.244.0.2:48894->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7099456311627794934.8489366121081164461. HINFO: read udp 10.244.0.2:46722->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7099456311627794934.8489366121081164461. HINFO: read udp 10.244.0.2:38253->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7099456311627794934.8489366121081164461. HINFO: read udp 10.244.0.2:46533->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7099456311627794934.8489366121081164461. HINFO: read udp 10.244.0.2:60118->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7099456311627794934.8489366121081164461. HINFO: read udp 10.244.0.2:33904->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7099456311627794934.8489366121081164461. HINFO: read udp 10.244.0.2:42093->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7099456311627794934.8489366121081164461. HINFO: read udp 10.244.0.2:59655->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [44d19b3f7483] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4234977603254235766.5539316175847614172. HINFO: read udp 10.244.0.2:34894->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4234977603254235766.5539316175847614172. HINFO: read udp 10.244.0.2:54993->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4234977603254235766.5539316175847614172. HINFO: read udp 10.244.0.2:58122->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4234977603254235766.5539316175847614172. HINFO: read udp 10.244.0.2:53840->10.0.2.3:53: i/o timeout
	
	
	==> coredns [78686b00c83b] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4235913236490184937.779998996402491127. HINFO: read udp 10.244.0.3:34890->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4235913236490184937.779998996402491127. HINFO: read udp 10.244.0.3:39538->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4235913236490184937.779998996402491127. HINFO: read udp 10.244.0.3:48359->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4235913236490184937.779998996402491127. HINFO: read udp 10.244.0.3:45579->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4235913236490184937.779998996402491127. HINFO: read udp 10.244.0.3:57251->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
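
All four coredns instances fail the same way: their startup HINFO probe to 10.0.2.3:53, QEMU's user-mode (slirp) DNS forwarder, times out. A direct test of that resolver from inside the guest separates a guest-networking fault from a CoreDNS one:

    # Query the slirp DNS forwarder that the pods fall back to:
    nslookup kubernetes.io 10.0.2.3
    # If this also times out, the i/o timeouts above are a QEMU
    # user-network/DNS problem rather than a CoreDNS config problem.
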
	
	
	==> describe nodes <==
	Name:               running-upgrade-064000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-064000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c2be9b5c96c7962d271300acacf405d2402b272f
	                    minikube.k8s.io/name=running-upgrade-064000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_07T10_16_13_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Mar 2024 18:16:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-064000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Mar 2024 18:20:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Mar 2024 18:16:13 +0000   Thu, 07 Mar 2024 18:16:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Mar 2024 18:16:13 +0000   Thu, 07 Mar 2024 18:16:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Mar 2024 18:16:13 +0000   Thu, 07 Mar 2024 18:16:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Mar 2024 18:16:13 +0000   Thu, 07 Mar 2024 18:16:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-064000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 4c47a5587b9744c39193074bb8d36cc1
	  System UUID:                4c47a5587b9744c39193074bb8d36cc1
	  Boot ID:                    23e2b4ec-7b5c-4898-9e78-b856282e9cc0
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-fth54                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m6s
	  kube-system                 coredns-6d4b75cb6d-v4987                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m6s
	  kube-system                 etcd-running-upgrade-064000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m22s
	  kube-system                 kube-apiserver-running-upgrade-064000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-controller-manager-running-upgrade-064000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-proxy-tl544                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-scheduler-running-upgrade-064000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m5s   kube-proxy       
	  Normal  NodeReady                4m21s  kubelet          Node running-upgrade-064000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m21s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m21s  kubelet          Node running-upgrade-064000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s  kubelet          Node running-upgrade-064000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s  kubelet          Node running-upgrade-064000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m21s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m7s   node-controller  Node running-upgrade-064000 event: Registered Node running-upgrade-064000 in Controller
	
	
	==> dmesg <==
	[  +1.379594] systemd-fstab-generator[878]: Ignoring "noauto" for root device
	[  +0.074667] systemd-fstab-generator[889]: Ignoring "noauto" for root device
	[  +0.078277] systemd-fstab-generator[900]: Ignoring "noauto" for root device
	[  +1.137321] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.086640] systemd-fstab-generator[1050]: Ignoring "noauto" for root device
	[  +0.084105] systemd-fstab-generator[1061]: Ignoring "noauto" for root device
	[  +2.651448] systemd-fstab-generator[1289]: Ignoring "noauto" for root device
	[ +14.167482] systemd-fstab-generator[1957]: Ignoring "noauto" for root device
	[  +2.733731] systemd-fstab-generator[2232]: Ignoring "noauto" for root device
	[  +0.145101] systemd-fstab-generator[2266]: Ignoring "noauto" for root device
	[  +0.086944] systemd-fstab-generator[2277]: Ignoring "noauto" for root device
	[  +0.097419] systemd-fstab-generator[2290]: Ignoring "noauto" for root device
	[  +1.354330] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.157530] systemd-fstab-generator[2694]: Ignoring "noauto" for root device
	[  +0.080902] systemd-fstab-generator[2705]: Ignoring "noauto" for root device
	[  +0.086446] systemd-fstab-generator[2716]: Ignoring "noauto" for root device
	[  +0.080351] systemd-fstab-generator[2730]: Ignoring "noauto" for root device
	[  +2.049453] systemd-fstab-generator[2881]: Ignoring "noauto" for root device
	[Mar 7 18:12] systemd-fstab-generator[3250]: Ignoring "noauto" for root device
	[  +1.042707] systemd-fstab-generator[3377]: Ignoring "noauto" for root device
	[ +19.717527] kauditd_printk_skb: 68 callbacks suppressed
	[Mar 7 18:16] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.310685] systemd-fstab-generator[11571]: Ignoring "noauto" for root device
	[  +5.146097] systemd-fstab-generator[12157]: Ignoring "noauto" for root device
	[  +0.459446] systemd-fstab-generator[12289]: Ignoring "noauto" for root device
	
	
	==> etcd [f5dd3c2f1586] <==
	{"level":"info","ts":"2024-03-07T18:16:09.441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-03-07T18:16:09.442Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-03-07T18:16:09.444Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-07T18:16:09.444Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-07T18:16:09.444Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-07T18:16:09.444Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-03-07T18:16:09.444Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-03-07T18:16:10.037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-07T18:16:10.037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-07T18:16:10.037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-03-07T18:16:10.037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-03-07T18:16:10.037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-03-07T18:16:10.037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-03-07T18:16:10.037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-03-07T18:16:10.037Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T18:16:10.040Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T18:16:10.041Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T18:16:10.041Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T18:16:10.041Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-064000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-07T18:16:10.041Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-07T18:16:10.042Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-07T18:16:10.042Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-07T18:16:10.042Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-03-07T18:16:10.042Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-07T18:16:10.042Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 18:20:34 up 9 min,  0 users,  load average: 0.34, 0.36, 0.20
	Linux running-upgrade-064000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [bfad85a2aa85] <==
	I0307 18:16:11.259759       1 controller.go:611] quota admission added evaluator for: namespaces
	I0307 18:16:11.292738       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0307 18:16:11.292756       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0307 18:16:11.292763       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0307 18:16:11.292770       1 cache.go:39] Caches are synced for autoregister controller
	I0307 18:16:11.292816       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0307 18:16:11.300302       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0307 18:16:12.021451       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0307 18:16:12.196186       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0307 18:16:12.199193       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0307 18:16:12.199210       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0307 18:16:12.322185       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0307 18:16:12.333512       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0307 18:16:12.364602       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0307 18:16:12.366537       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0307 18:16:12.366878       1 controller.go:611] quota admission added evaluator for: endpoints
	I0307 18:16:12.368146       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0307 18:16:13.345086       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0307 18:16:13.694519       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0307 18:16:13.700324       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0307 18:16:13.705418       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0307 18:16:13.755972       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0307 18:16:27.305525       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0307 18:16:28.105298       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0307 18:16:29.212085       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [a5eda657976b] <==
	I0307 18:16:27.255689       1 shared_informer.go:262] Caches are synced for disruption
	I0307 18:16:27.255697       1 disruption.go:371] Sending events to api server.
	I0307 18:16:27.256908       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0307 18:16:27.259391       1 shared_informer.go:262] Caches are synced for namespace
	I0307 18:16:27.305698       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0307 18:16:27.306801       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0307 18:16:27.354849       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0307 18:16:27.354890       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0307 18:16:27.354899       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0307 18:16:27.354921       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0307 18:16:27.403154       1 shared_informer.go:262] Caches are synced for taint
	I0307 18:16:27.403263       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0307 18:16:27.403288       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-064000. Assuming now as a timestamp.
	I0307 18:16:27.403341       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0307 18:16:27.403396       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0307 18:16:27.403600       1 event.go:294] "Event occurred" object="running-upgrade-064000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-064000 event: Registered Node running-upgrade-064000 in Controller"
	I0307 18:16:27.452592       1 shared_informer.go:262] Caches are synced for HPA
	I0307 18:16:27.456845       1 shared_informer.go:262] Caches are synced for resource quota
	I0307 18:16:27.497064       1 shared_informer.go:262] Caches are synced for resource quota
	I0307 18:16:27.875216       1 shared_informer.go:262] Caches are synced for garbage collector
	I0307 18:16:27.905054       1 shared_informer.go:262] Caches are synced for garbage collector
	I0307 18:16:27.905062       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0307 18:16:28.108172       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-tl544"
	I0307 18:16:28.256370       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-v4987"
	I0307 18:16:28.258115       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-fth54"
	
	
	==> kube-proxy [91458cddd2a8] <==
	I0307 18:16:29.197494       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0307 18:16:29.197526       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0307 18:16:29.197606       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0307 18:16:29.209702       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0307 18:16:29.209713       1 server_others.go:206] "Using iptables Proxier"
	I0307 18:16:29.209741       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0307 18:16:29.209926       1 server.go:661] "Version info" version="v1.24.1"
	I0307 18:16:29.209951       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0307 18:16:29.210380       1 config.go:317] "Starting service config controller"
	I0307 18:16:29.210420       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0307 18:16:29.210451       1 config.go:226] "Starting endpoint slice config controller"
	I0307 18:16:29.210473       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0307 18:16:29.211145       1 config.go:444] "Starting node config controller"
	I0307 18:16:29.211171       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0307 18:16:29.311974       1 shared_informer.go:262] Caches are synced for node config
	I0307 18:16:29.311989       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0307 18:16:29.311994       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [89b036ed2ce0] <==
	W0307 18:16:11.258088       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0307 18:16:11.258106       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0307 18:16:11.258169       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0307 18:16:11.258191       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0307 18:16:11.258280       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0307 18:16:11.258451       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0307 18:16:11.258498       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0307 18:16:11.258518       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0307 18:16:11.258554       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0307 18:16:11.258579       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0307 18:16:11.258603       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0307 18:16:11.258621       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0307 18:16:11.258921       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0307 18:16:11.258972       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0307 18:16:11.259195       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0307 18:16:11.259252       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0307 18:16:12.060950       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0307 18:16:12.060997       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0307 18:16:12.067325       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0307 18:16:12.067336       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0307 18:16:12.144335       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0307 18:16:12.144353       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0307 18:16:12.229773       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0307 18:16:12.229791       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0307 18:16:12.756764       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Thu 2024-03-07 18:11:20 UTC, ends at Thu 2024-03-07 18:20:34 UTC. --
	Mar 07 18:16:27 running-upgrade-064000 kubelet[12163]: I0307 18:16:27.409789   12163 topology_manager.go:200] "Topology Admit Handler"
	Mar 07 18:16:27 running-upgrade-064000 kubelet[12163]: I0307 18:16:27.558180   12163 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d62dd592-1f0f-40e3-b7ab-08f8ee141a55-tmp\") pod \"storage-provisioner\" (UID: \"d62dd592-1f0f-40e3-b7ab-08f8ee141a55\") " pod="kube-system/storage-provisioner"
	Mar 07 18:16:27 running-upgrade-064000 kubelet[12163]: I0307 18:16:27.558302   12163 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnzjh\" (UniqueName: \"kubernetes.io/projected/d62dd592-1f0f-40e3-b7ab-08f8ee141a55-kube-api-access-bnzjh\") pod \"storage-provisioner\" (UID: \"d62dd592-1f0f-40e3-b7ab-08f8ee141a55\") " pod="kube-system/storage-provisioner"
	Mar 07 18:16:27 running-upgrade-064000 kubelet[12163]: E0307 18:16:27.661658   12163 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Mar 07 18:16:27 running-upgrade-064000 kubelet[12163]: E0307 18:16:27.661677   12163 projected.go:192] Error preparing data for projected volume kube-api-access-bnzjh for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Mar 07 18:16:27 running-upgrade-064000 kubelet[12163]: E0307 18:16:27.661708   12163 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/d62dd592-1f0f-40e3-b7ab-08f8ee141a55-kube-api-access-bnzjh podName:d62dd592-1f0f-40e3-b7ab-08f8ee141a55 nodeName:}" failed. No retries permitted until 2024-03-07 18:16:28.161695624 +0000 UTC m=+14.481938032 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bnzjh" (UniqueName: "kubernetes.io/projected/d62dd592-1f0f-40e3-b7ab-08f8ee141a55-kube-api-access-bnzjh") pod "storage-provisioner" (UID: "d62dd592-1f0f-40e3-b7ab-08f8ee141a55") : configmap "kube-root-ca.crt" not found
	Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: I0307 18:16:28.111151   12163 topology_manager.go:200] "Topology Admit Handler"
	Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: I0307 18:16:28.260677   12163 topology_manager.go:200] "Topology Admit Handler"
	Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: I0307 18:16:28.261484   12163 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae0a3414-75b7-4214-932c-3a20d57ff291-lib-modules\") pod \"kube-proxy-tl544\" (UID: \"ae0a3414-75b7-4214-932c-3a20d57ff291\") " pod="kube-system/kube-proxy-tl544"
	Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: I0307 18:16:28.261523   12163 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfkv7\" (UniqueName: \"kubernetes.io/projected/ae0a3414-75b7-4214-932c-3a20d57ff291-kube-api-access-pfkv7\") pod \"kube-proxy-tl544\" (UID: \"ae0a3414-75b7-4214-932c-3a20d57ff291\") " pod="kube-system/kube-proxy-tl544"
	Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: I0307 18:16:28.261551   12163 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ae0a3414-75b7-4214-932c-3a20d57ff291-kube-proxy\") pod \"kube-proxy-tl544\" (UID: \"ae0a3414-75b7-4214-932c-3a20d57ff291\") " pod="kube-system/kube-proxy-tl544"
	Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: I0307 18:16:28.261574   12163 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae0a3414-75b7-4214-932c-3a20d57ff291-xtables-lock\") pod \"kube-proxy-tl544\" (UID: \"ae0a3414-75b7-4214-932c-3a20d57ff291\") " pod="kube-system/kube-proxy-tl544"
	Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: I0307 18:16:28.261926   12163 topology_manager.go:200] "Topology Admit Handler"
	Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: W0307 18:16:28.265321   12163 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: E0307 18:16:28.265367   12163 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-064000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-064000' and this object
	Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: I0307 18:16:28.362642   12163 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/897bcdfe-62b0-4886-8760-c2df48944494-config-volume\") pod \"coredns-6d4b75cb6d-v4987\" (UID: \"897bcdfe-62b0-4886-8760-c2df48944494\") " pod="kube-system/coredns-6d4b75cb6d-v4987"
	Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: I0307 18:16:28.362723   12163 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtvhh\" (UniqueName: \"kubernetes.io/projected/897bcdfe-62b0-4886-8760-c2df48944494-kube-api-access-jtvhh\") pod \"coredns-6d4b75cb6d-v4987\" (UID: \"897bcdfe-62b0-4886-8760-c2df48944494\") " pod="kube-system/coredns-6d4b75cb6d-v4987"
	Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: I0307 18:16:28.462978   12163 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51c19a10-c369-428e-80a5-6a16bc8659b0-config-volume\") pod \"coredns-6d4b75cb6d-fth54\" (UID: \"51c19a10-c369-428e-80a5-6a16bc8659b0\") " pod="kube-system/coredns-6d4b75cb6d-fth54"
	Mar 07 18:16:28 running-upgrade-064000 kubelet[12163]: I0307 18:16:28.463001   12163 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnj45\" (UniqueName: \"kubernetes.io/projected/51c19a10-c369-428e-80a5-6a16bc8659b0-kube-api-access-dnj45\") pod \"coredns-6d4b75cb6d-fth54\" (UID: \"51c19a10-c369-428e-80a5-6a16bc8659b0\") " pod="kube-system/coredns-6d4b75cb6d-fth54"
	Mar 07 18:16:29 running-upgrade-064000 kubelet[12163]: E0307 18:16:29.463507   12163 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Mar 07 18:16:29 running-upgrade-064000 kubelet[12163]: E0307 18:16:29.463545   12163 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/897bcdfe-62b0-4886-8760-c2df48944494-config-volume podName:897bcdfe-62b0-4886-8760-c2df48944494 nodeName:}" failed. No retries permitted until 2024-03-07 18:16:29.96353577 +0000 UTC m=+16.283778178 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/897bcdfe-62b0-4886-8760-c2df48944494-config-volume") pod "coredns-6d4b75cb6d-v4987" (UID: "897bcdfe-62b0-4886-8760-c2df48944494") : failed to sync configmap cache: timed out waiting for the condition
	Mar 07 18:16:29 running-upgrade-064000 kubelet[12163]: E0307 18:16:29.563803   12163 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Mar 07 18:16:29 running-upgrade-064000 kubelet[12163]: E0307 18:16:29.563935   12163 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/51c19a10-c369-428e-80a5-6a16bc8659b0-config-volume podName:51c19a10-c369-428e-80a5-6a16bc8659b0 nodeName:}" failed. No retries permitted until 2024-03-07 18:16:30.06383649 +0000 UTC m=+16.384078898 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/51c19a10-c369-428e-80a5-6a16bc8659b0-config-volume") pod "coredns-6d4b75cb6d-fth54" (UID: "51c19a10-c369-428e-80a5-6a16bc8659b0") : failed to sync configmap cache: timed out waiting for the condition
	Mar 07 18:20:17 running-upgrade-064000 kubelet[12163]: I0307 18:20:17.107160   12163 scope.go:110] "RemoveContainer" containerID="b646ef99863bd51c0940fc0aafd538d15f602ac3f1b2be3ff249dc4ed9004a77"
	Mar 07 18:20:17 running-upgrade-064000 kubelet[12163]: I0307 18:20:17.121207   12163 scope.go:110] "RemoveContainer" containerID="8f40abedda957319040b192af0bd30b81c70f2dac7ef45baa8ebc6e3856efa63"
	
	
	==> storage-provisioner [e098dcf633e8] <==
	I0307 18:16:28.515709       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0307 18:16:28.520946       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0307 18:16:28.521018       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0307 18:16:28.524173       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0307 18:16:28.524273       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-064000_0fea9b99-45e2-4aa8-9471-5f3be2829326!
	I0307 18:16:28.524712       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9808492a-d31e-4319-9090-da99c449cad2", APIVersion:"v1", ResourceVersion:"358", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-064000_0fea9b99-45e2-4aa8-9471-5f3be2829326 became leader
	I0307 18:16:28.624751       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-064000_0fea9b99-45e2-4aa8-9471-5f3be2829326!
	

-- /stdout --
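
The repeated "read udp 10.244.0.3:...->10.0.2.3:53: i/o timeout" lines from CoreDNS above show pod DNS queries to the QEMU user-mode resolver (10.0.2.3:53) going unanswered. A minimal in-cluster probe for this, sketched under the assumption of a reachable apiserver and a stock busybox image (neither holds once the apiserver stops below):

	kubectl run dns-probe --image=busybox:1.36 --rm -it --restart=Never -- nslookup kubernetes.default.svc.cluster.local
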
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-064000 -n running-upgrade-064000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-064000 -n running-upgrade-064000: exit status 2 (15.662207917s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-064000" apiserver is not running, skipping kubectl commands (state="Stopped")
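
The --format flag used above is a Go template rendered against minikube's component status, so a single call can report the other components as well. A sketch with the documented Host, Kubelet, and APIServer fields, reusing the profile name from this run:

	out/minikube-darwin-arm64 status -p running-upgrade-064000 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'
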
helpers_test.go:175: Cleaning up "running-upgrade-064000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-064000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-064000: (2.331767625s)
--- FAIL: TestRunningBinaryUpgrade (627.17s)

TestKubernetesUpgrade (18.7s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-726000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-726000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.770809708s)

-- stdout --
	* [kubernetes-upgrade-726000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-726000" primary control-plane node in "kubernetes-upgrade-726000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-726000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 10:13:25.752054    4279 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:13:25.752170    4279 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:13:25.752173    4279 out.go:304] Setting ErrFile to fd 2...
	I0307 10:13:25.752175    4279 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:13:25.752306    4279 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:13:25.753372    4279 out.go:298] Setting JSON to false
	I0307 10:13:25.769716    4279 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4377,"bootTime":1709830828,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 10:13:25.769787    4279 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:13:25.774601    4279 out.go:177] * [kubernetes-upgrade-726000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 10:13:25.782617    4279 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 10:13:25.787546    4279 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:13:25.782663    4279 notify.go:220] Checking for updates...
	I0307 10:13:25.793550    4279 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 10:13:25.796471    4279 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:13:25.799570    4279 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 10:13:25.802599    4279 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:13:25.804489    4279 config.go:182] Loaded profile config "multinode-606000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:13:25.804557    4279 config.go:182] Loaded profile config "running-upgrade-064000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 10:13:25.804618    4279 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:13:25.808559    4279 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 10:13:25.815347    4279 start.go:297] selected driver: qemu2
	I0307 10:13:25.815353    4279 start.go:901] validating driver "qemu2" against <nil>
	I0307 10:13:25.815358    4279 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:13:25.817667    4279 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 10:13:25.820563    4279 out.go:177] * Automatically selected the socket_vmnet network
	I0307 10:13:25.823642    4279 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 10:13:25.823687    4279 cni.go:84] Creating CNI manager for ""
	I0307 10:13:25.823694    4279 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0307 10:13:25.823716    4279 start.go:340] cluster config:
	{Name:kubernetes-upgrade-726000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-726000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:13:25.828429    4279 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:13:25.835570    4279 out.go:177] * Starting "kubernetes-upgrade-726000" primary control-plane node in "kubernetes-upgrade-726000" cluster
	I0307 10:13:25.839515    4279 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0307 10:13:25.839530    4279 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0307 10:13:25.839536    4279 cache.go:56] Caching tarball of preloaded images
	I0307 10:13:25.839614    4279 preload.go:173] Found /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 10:13:25.839620    4279 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0307 10:13:25.839678    4279 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/kubernetes-upgrade-726000/config.json ...
	I0307 10:13:25.839688    4279 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/kubernetes-upgrade-726000/config.json: {Name:mk801cdca127f830a4776ccda6baa4934fce144c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:13:25.839893    4279 start.go:360] acquireMachinesLock for kubernetes-upgrade-726000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:13:25.839925    4279 start.go:364] duration metric: took 24.917µs to acquireMachinesLock for "kubernetes-upgrade-726000"
	I0307 10:13:25.839937    4279 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-726000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-726000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:13:25.839967    4279 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:13:25.848491    4279 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 10:13:25.869607    4279 start.go:159] libmachine.API.Create for "kubernetes-upgrade-726000" (driver="qemu2")
	I0307 10:13:25.869631    4279 client.go:168] LocalClient.Create starting
	I0307 10:13:25.869691    4279 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:13:25.869726    4279 main.go:141] libmachine: Decoding PEM data...
	I0307 10:13:25.869734    4279 main.go:141] libmachine: Parsing certificate...
	I0307 10:13:25.869777    4279 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:13:25.869797    4279 main.go:141] libmachine: Decoding PEM data...
	I0307 10:13:25.869805    4279 main.go:141] libmachine: Parsing certificate...
	I0307 10:13:25.870167    4279 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:13:26.032507    4279 main.go:141] libmachine: Creating SSH key...
	I0307 10:13:26.081171    4279 main.go:141] libmachine: Creating Disk image...
	I0307 10:13:26.081177    4279 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:13:26.081344    4279 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubernetes-upgrade-726000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubernetes-upgrade-726000/disk.qcow2
	I0307 10:13:26.096634    4279 main.go:141] libmachine: STDOUT: 
	I0307 10:13:26.096662    4279 main.go:141] libmachine: STDERR: 
	I0307 10:13:26.096741    4279 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubernetes-upgrade-726000/disk.qcow2 +20000M
	I0307 10:13:26.107863    4279 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:13:26.107884    4279 main.go:141] libmachine: STDERR: 
	I0307 10:13:26.107899    4279 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubernetes-upgrade-726000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubernetes-upgrade-726000/disk.qcow2
	I0307 10:13:26.107907    4279 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:13:26.107935    4279 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubernetes-upgrade-726000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubernetes-upgrade-726000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubernetes-upgrade-726000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:09:c6:d6:f6:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubernetes-upgrade-726000/disk.qcow2
	I0307 10:13:26.109829    4279 main.go:141] libmachine: STDOUT: 
	I0307 10:13:26.109845    4279 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:13:26.109862    4279 client.go:171] duration metric: took 240.234375ms to LocalClient.Create
	I0307 10:13:28.111940    4279 start.go:128] duration metric: took 2.272034292s to createHost
	I0307 10:13:28.111988    4279 start.go:83] releasing machines lock for "kubernetes-upgrade-726000", held for 2.272131584s
	W0307 10:13:28.112054    4279 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:13:28.122249    4279 out.go:177] * Deleting "kubernetes-upgrade-726000" in qemu2 ...
	W0307 10:13:28.144934    4279 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:13:28.144948    4279 start.go:728] Will try again in 5 seconds ...
	I0307 10:13:33.147008    4279 start.go:360] acquireMachinesLock for kubernetes-upgrade-726000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:13:33.147489    4279 start.go:364] duration metric: took 397.083µs to acquireMachinesLock for "kubernetes-upgrade-726000"
	I0307 10:13:33.147656    4279 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-726000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-726000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:13:33.147903    4279 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:13:33.157537    4279 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 10:13:33.206115    4279 start.go:159] libmachine.API.Create for "kubernetes-upgrade-726000" (driver="qemu2")
	I0307 10:13:33.206163    4279 client.go:168] LocalClient.Create starting
	I0307 10:13:33.206274    4279 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:13:33.206335    4279 main.go:141] libmachine: Decoding PEM data...
	I0307 10:13:33.206351    4279 main.go:141] libmachine: Parsing certificate...
	I0307 10:13:33.206412    4279 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:13:33.206454    4279 main.go:141] libmachine: Decoding PEM data...
	I0307 10:13:33.206465    4279 main.go:141] libmachine: Parsing certificate...
	I0307 10:13:33.206968    4279 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:13:33.357899    4279 main.go:141] libmachine: Creating SSH key...
	I0307 10:13:33.418630    4279 main.go:141] libmachine: Creating Disk image...
	I0307 10:13:33.418636    4279 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:13:33.418805    4279 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubernetes-upgrade-726000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubernetes-upgrade-726000/disk.qcow2
	I0307 10:13:33.431345    4279 main.go:141] libmachine: STDOUT: 
	I0307 10:13:33.431367    4279 main.go:141] libmachine: STDERR: 
	I0307 10:13:33.431435    4279 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubernetes-upgrade-726000/disk.qcow2 +20000M
	I0307 10:13:33.442324    4279 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:13:33.442349    4279 main.go:141] libmachine: STDERR: 
	I0307 10:13:33.442361    4279 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubernetes-upgrade-726000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubernetes-upgrade-726000/disk.qcow2
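	The convert/resize pair above first rewrites the raw bootstrap disk as qcow2 and then grows its virtual size by 20000 MB; growing a qcow2 image is cheap because space is allocated lazily. A minimal by-hand sanity check of the result, reusing the machine path from the log:
	# Report format, virtual size, and actual on-disk size
	qemu-img info /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubernetes-upgrade-726000/disk.qcow2
	# Check internal consistency of the qcow2 metadata
	qemu-img check /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubernetes-upgrade-726000/disk.qcow2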
	I0307 10:13:33.442367    4279 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:13:33.442412    4279 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubernetes-upgrade-726000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubernetes-upgrade-726000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubernetes-upgrade-726000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:de:54:8b:69:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubernetes-upgrade-726000/disk.qcow2
	I0307 10:13:33.444149    4279 main.go:141] libmachine: STDOUT: 
	I0307 10:13:33.444167    4279 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:13:33.444179    4279 client.go:171] duration metric: took 238.01725ms to LocalClient.Create
	I0307 10:13:35.446328    4279 start.go:128] duration metric: took 2.298457208s to createHost
	I0307 10:13:35.446414    4279 start.go:83] releasing machines lock for "kubernetes-upgrade-726000", held for 2.29893575s
	W0307 10:13:35.446811    4279 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-726000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:13:35.460395    4279 out.go:177] 
	W0307 10:13:35.464784    4279 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:13:35.464832    4279 out.go:239] * 
	W0307 10:13:35.466933    4279 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:13:35.478510    4279 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-726000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
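Every start attempt in this test dies at the same point: QEMU networking is delegated to socket_vmnet, and the client cannot reach the daemon's Unix socket at /var/run/socket_vmnet, so the VM never boots. A sketch of checking that precondition by hand, assuming socket_vmnet is installed under /opt/socket_vmnet as the log's paths suggest (the gateway address below is illustrative; see the socket_vmnet README for the exact invocation):
	# Does the socket exist, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If nothing is listening, launch the daemon in the foreground
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet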
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-726000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-726000: (3.486462625s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-726000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-726000 status --format={{.Host}}: exit status 7 (45.098875ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-726000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-726000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.191112709s)

-- stdout --
	* [kubernetes-upgrade-726000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-726000" primary control-plane node in "kubernetes-upgrade-726000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-726000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-726000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 10:13:39.056114    4324 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:13:39.056258    4324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:13:39.056262    4324 out.go:304] Setting ErrFile to fd 2...
	I0307 10:13:39.056264    4324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:13:39.056391    4324 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:13:39.057446    4324 out.go:298] Setting JSON to false
	I0307 10:13:39.073866    4324 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4391,"bootTime":1709830828,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 10:13:39.073952    4324 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:13:39.078669    4324 out.go:177] * [kubernetes-upgrade-726000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 10:13:39.084568    4324 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 10:13:39.088612    4324 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:13:39.084666    4324 notify.go:220] Checking for updates...
	I0307 10:13:39.094609    4324 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 10:13:39.097638    4324 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:13:39.100683    4324 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 10:13:39.103526    4324 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:13:39.106926    4324 config.go:182] Loaded profile config "kubernetes-upgrade-726000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0307 10:13:39.107177    4324 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:13:39.111729    4324 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 10:13:39.118684    4324 start.go:297] selected driver: qemu2
	I0307 10:13:39.118689    4324 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-726000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-726000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:13:39.118740    4324 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:13:39.121029    4324 cni.go:84] Creating CNI manager for ""
	I0307 10:13:39.121048    4324 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 10:13:39.121071    4324 start.go:340] cluster config:
	{Name:kubernetes-upgrade-726000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-726000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:13:39.125460    4324 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:13:39.133592    4324 out.go:177] * Starting "kubernetes-upgrade-726000" primary control-plane node in "kubernetes-upgrade-726000" cluster
	I0307 10:13:39.137588    4324 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0307 10:13:39.137611    4324 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0307 10:13:39.137622    4324 cache.go:56] Caching tarball of preloaded images
	I0307 10:13:39.137719    4324 preload.go:173] Found /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 10:13:39.137724    4324 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0307 10:13:39.137778    4324 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/kubernetes-upgrade-726000/config.json ...
	I0307 10:13:39.138304    4324 start.go:360] acquireMachinesLock for kubernetes-upgrade-726000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:13:39.138338    4324 start.go:364] duration metric: took 26.792µs to acquireMachinesLock for "kubernetes-upgrade-726000"
	I0307 10:13:39.138350    4324 start.go:96] Skipping create...Using existing machine configuration
	I0307 10:13:39.138356    4324 fix.go:54] fixHost starting: 
	I0307 10:13:39.138471    4324 fix.go:112] recreateIfNeeded on kubernetes-upgrade-726000: state=Stopped err=<nil>
	W0307 10:13:39.138479    4324 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 10:13:39.142653    4324 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-726000" ...
	I0307 10:13:39.150563    4324 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubernetes-upgrade-726000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubernetes-upgrade-726000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubernetes-upgrade-726000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:de:54:8b:69:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubernetes-upgrade-726000/disk.qcow2
	I0307 10:13:39.152503    4324 main.go:141] libmachine: STDOUT: 
	I0307 10:13:39.152521    4324 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:13:39.152550    4324 fix.go:56] duration metric: took 14.193583ms for fixHost
	I0307 10:13:39.152555    4324 start.go:83] releasing machines lock for "kubernetes-upgrade-726000", held for 14.210042ms
	W0307 10:13:39.152561    4324 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:13:39.152593    4324 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:13:39.152597    4324 start.go:728] Will try again in 5 seconds ...
	I0307 10:13:44.153351    4324 start.go:360] acquireMachinesLock for kubernetes-upgrade-726000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:13:44.153871    4324 start.go:364] duration metric: took 409.167µs to acquireMachinesLock for "kubernetes-upgrade-726000"
	I0307 10:13:44.154037    4324 start.go:96] Skipping create...Using existing machine configuration
	I0307 10:13:44.154061    4324 fix.go:54] fixHost starting: 
	I0307 10:13:44.154785    4324 fix.go:112] recreateIfNeeded on kubernetes-upgrade-726000: state=Stopped err=<nil>
	W0307 10:13:44.154814    4324 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 10:13:44.164262    4324 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-726000" ...
	I0307 10:13:44.167505    4324 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubernetes-upgrade-726000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubernetes-upgrade-726000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubernetes-upgrade-726000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:de:54:8b:69:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubernetes-upgrade-726000/disk.qcow2
	I0307 10:13:44.181405    4324 main.go:141] libmachine: STDOUT: 
	I0307 10:13:44.181466    4324 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:13:44.181593    4324 fix.go:56] duration metric: took 27.535958ms for fixHost
	I0307 10:13:44.181615    4324 start.go:83] releasing machines lock for "kubernetes-upgrade-726000", held for 27.720709ms
	W0307 10:13:44.181812    4324 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-726000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:13:44.189191    4324 out.go:177] 
	W0307 10:13:44.193283    4324 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:13:44.193329    4324 out.go:239] * 
	W0307 10:13:44.195914    4324 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:13:44.204243    4324 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-726000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-726000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-726000 version --output=json: exit status 1 (61.563084ms)

** stderr ** 
	error: context "kubernetes-upgrade-726000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
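The missing context follows directly from the failed starts above: minikube writes a kubeconfig entry for the profile only once the cluster comes up, so neither start populated one. A quick way to confirm which contexts actually exist:
	# Contexts in the active kubeconfig; a successful start would have added kubernetes-upgrade-726000
	kubectl config get-contexts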
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-03-07 10:13:44.281561 -0800 PST m=+2688.111347459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-726000 -n kubernetes-upgrade-726000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-726000 -n kubernetes-upgrade-726000: exit status 7 (37.451792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-726000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-726000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-726000
--- FAIL: TestKubernetesUpgrade (18.70s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.73s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.32.0 on darwin (arm64)
- MINIKUBE_LOCATION=18241
- KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3773455790/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.73s)
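This failure (and the identical one below) is expected on this worker: hyperkit only supports x86_64 Macs and was never ported to Apple silicon, which is exactly what DRV_UNSUPPORTED_OS reports. A one-line check of the host architecture:
	# hyperkit requires an x86_64 host; Apple silicon reports arm64
	uname -m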

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.21s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.32.0 on darwin (arm64)
- MINIKUBE_LOCATION=18241
- KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current4046769314/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.21s)

TestStoppedBinaryUpgrade/Upgrade (580.56s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3477592845 start -p stopped-upgrade-853000 --memory=2200 --vm-driver=qemu2 
E0307 10:14:13.798046    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3477592845 start -p stopped-upgrade-853000 --memory=2200 --vm-driver=qemu2 : (45.202439s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3477592845 -p stopped-upgrade-853000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3477592845 -p stopped-upgrade-853000 stop: (12.126568292s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-853000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0307 10:15:16.469882    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/functional-618000/client.crt: no such file or directory
E0307 10:18:19.532569    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/functional-618000/client.crt: no such file or directory
E0307 10:19:13.787974    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/client.crt: no such file or directory
E0307 10:20:16.459599    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/functional-618000/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-853000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m43.14646525s)

-- stdout --
	* [stopped-upgrade-853000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-853000" primary control-plane node in "stopped-upgrade-853000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-853000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0307 10:14:46.746354    4364 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:14:46.746523    4364 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:14:46.746527    4364 out.go:304] Setting ErrFile to fd 2...
	I0307 10:14:46.746530    4364 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:14:46.746682    4364 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:14:46.747830    4364 out.go:298] Setting JSON to false
	I0307 10:14:46.766743    4364 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4458,"bootTime":1709830828,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 10:14:46.766804    4364 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:14:46.770999    4364 out.go:177] * [stopped-upgrade-853000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 10:14:46.777036    4364 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 10:14:46.777081    4364 notify.go:220] Checking for updates...
	I0307 10:14:46.784897    4364 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:14:46.788028    4364 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 10:14:46.791034    4364 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:14:46.794005    4364 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 10:14:46.797025    4364 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:14:46.800309    4364 config.go:182] Loaded profile config "stopped-upgrade-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 10:14:46.803963    4364 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0307 10:14:46.807038    4364 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:14:46.810923    4364 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 10:14:46.817999    4364 start.go:297] selected driver: qemu2
	I0307 10:14:46.818006    4364 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-853000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50517 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-853000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0307 10:14:46.818053    4364 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:14:46.820685    4364 cni.go:84] Creating CNI manager for ""
	I0307 10:14:46.820708    4364 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 10:14:46.820736    4364 start.go:340] cluster config:
	{Name:stopped-upgrade-853000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50517 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-853000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0307 10:14:46.820791    4364 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:14:46.822665    4364 out.go:177] * Starting "stopped-upgrade-853000" primary control-plane node in "stopped-upgrade-853000" cluster
	I0307 10:14:46.826908    4364 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0307 10:14:46.826924    4364 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0307 10:14:46.826933    4364 cache.go:56] Caching tarball of preloaded images
	I0307 10:14:46.826989    4364 preload.go:173] Found /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 10:14:46.826994    4364 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0307 10:14:46.827051    4364 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/config.json ...
	I0307 10:14:46.827341    4364 start.go:360] acquireMachinesLock for stopped-upgrade-853000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:14:46.827374    4364 start.go:364] duration metric: took 26.459µs to acquireMachinesLock for "stopped-upgrade-853000"
	I0307 10:14:46.827383    4364 start.go:96] Skipping create...Using existing machine configuration
	I0307 10:14:46.827386    4364 fix.go:54] fixHost starting: 
	I0307 10:14:46.827502    4364 fix.go:112] recreateIfNeeded on stopped-upgrade-853000: state=Stopped err=<nil>
	W0307 10:14:46.827510    4364 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 10:14:46.835946    4364 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-853000" ...
	I0307 10:14:46.840054    4364 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/stopped-upgrade-853000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/stopped-upgrade-853000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/stopped-upgrade-853000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50483-:22,hostfwd=tcp::50484-:2376,hostname=stopped-upgrade-853000 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/stopped-upgrade-853000/disk.qcow2
	I0307 10:14:46.882479    4364 main.go:141] libmachine: STDOUT: 
	I0307 10:14:46.882518    4364 main.go:141] libmachine: STDERR: 
	I0307 10:14:46.882524    4364 main.go:141] libmachine: Waiting for VM to start (ssh -p 50483 docker@127.0.0.1)...
	I0307 10:15:06.790483    4364 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/config.json ...
	I0307 10:15:06.791339    4364 machine.go:94] provisionDockerMachine start ...
	I0307 10:15:06.791528    4364 main.go:141] libmachine: Using SSH client type: native
	I0307 10:15:06.792083    4364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1003eda30] 0x1003f0290 <nil>  [] 0s} localhost 50483 <nil> <nil>}
	I0307 10:15:06.792100    4364 main.go:141] libmachine: About to run SSH command:
	hostname
	I0307 10:15:06.874483    4364 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0307 10:15:06.874510    4364 buildroot.go:166] provisioning hostname "stopped-upgrade-853000"
	I0307 10:15:06.874631    4364 main.go:141] libmachine: Using SSH client type: native
	I0307 10:15:06.874818    4364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1003eda30] 0x1003f0290 <nil>  [] 0s} localhost 50483 <nil> <nil>}
	I0307 10:15:06.874829    4364 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-853000 && echo "stopped-upgrade-853000" | sudo tee /etc/hostname
	I0307 10:15:06.948501    4364 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-853000
	
	I0307 10:15:06.948579    4364 main.go:141] libmachine: Using SSH client type: native
	I0307 10:15:06.948733    4364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1003eda30] 0x1003f0290 <nil>  [] 0s} localhost 50483 <nil> <nil>}
	I0307 10:15:06.948744    4364 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-853000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-853000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-853000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 10:15:07.012571    4364 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 10:15:07.012583    4364 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18241-1349/.minikube CaCertPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18241-1349/.minikube}
	I0307 10:15:07.012597    4364 buildroot.go:174] setting up certificates
	I0307 10:15:07.012602    4364 provision.go:84] configureAuth start
	I0307 10:15:07.012606    4364 provision.go:143] copyHostCerts
	I0307 10:15:07.012692    4364 exec_runner.go:144] found /Users/jenkins/minikube-integration/18241-1349/.minikube/ca.pem, removing ...
	I0307 10:15:07.012703    4364 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18241-1349/.minikube/ca.pem
	I0307 10:15:07.012816    4364 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18241-1349/.minikube/ca.pem (1078 bytes)
	I0307 10:15:07.013019    4364 exec_runner.go:144] found /Users/jenkins/minikube-integration/18241-1349/.minikube/cert.pem, removing ...
	I0307 10:15:07.013023    4364 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18241-1349/.minikube/cert.pem
	I0307 10:15:07.013071    4364 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18241-1349/.minikube/cert.pem (1123 bytes)
	I0307 10:15:07.013183    4364 exec_runner.go:144] found /Users/jenkins/minikube-integration/18241-1349/.minikube/key.pem, removing ...
	I0307 10:15:07.013187    4364 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18241-1349/.minikube/key.pem
	I0307 10:15:07.013231    4364 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18241-1349/.minikube/key.pem (1679 bytes)
	I0307 10:15:07.013314    4364 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-853000 san=[127.0.0.1 localhost minikube stopped-upgrade-853000]
	I0307 10:15:07.056850    4364 provision.go:177] copyRemoteCerts
	I0307 10:15:07.056881    4364 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 10:15:07.056888    4364 sshutil.go:53] new ssh client: &{IP:localhost Port:50483 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/stopped-upgrade-853000/id_rsa Username:docker}
	I0307 10:15:07.088333    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0307 10:15:07.095062    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0307 10:15:07.101553    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0307 10:15:07.108657    4364 provision.go:87] duration metric: took 96.053084ms to configureAuth
	I0307 10:15:07.108665    4364 buildroot.go:189] setting minikube options for container-runtime
	I0307 10:15:07.108785    4364 config.go:182] Loaded profile config "stopped-upgrade-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 10:15:07.108818    4364 main.go:141] libmachine: Using SSH client type: native
	I0307 10:15:07.108905    4364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1003eda30] 0x1003f0290 <nil>  [] 0s} localhost 50483 <nil> <nil>}
	I0307 10:15:07.108910    4364 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0307 10:15:07.167092    4364 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0307 10:15:07.167106    4364 buildroot.go:70] root file system type: tmpfs
	I0307 10:15:07.167158    4364 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0307 10:15:07.167210    4364 main.go:141] libmachine: Using SSH client type: native
	I0307 10:15:07.167323    4364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1003eda30] 0x1003f0290 <nil>  [] 0s} localhost 50483 <nil> <nil>}
	I0307 10:15:07.167357    4364 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0307 10:15:07.230672    4364 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0307 10:15:07.230731    4364 main.go:141] libmachine: Using SSH client type: native
	I0307 10:15:07.230842    4364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1003eda30] 0x1003f0290 <nil>  [] 0s} localhost 50483 <nil> <nil>}
	I0307 10:15:07.230852    4364 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0307 10:15:07.559705    4364 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
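	The diff || { mv ...; systemctl ... } one-liner above is idempotent: the unit file is only replaced, reloaded, and restarted when the newly rendered file differs from what is on disk, and the "diff: can't stat" output here simply means no docker.service existed yet in this fresh guest. A by-hand verification inside the VM, using standard systemd commands:
	# Show the unit file systemd actually loaded for docker
	systemctl cat docker
	# Confirm the service was enabled and came up
	systemctl is-enabled docker && systemctl is-active docker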
	I0307 10:15:07.559719    4364 machine.go:97] duration metric: took 768.39375ms to provisionDockerMachine
	I0307 10:15:07.559726    4364 start.go:293] postStartSetup for "stopped-upgrade-853000" (driver="qemu2")
	I0307 10:15:07.559732    4364 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 10:15:07.559797    4364 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 10:15:07.559805    4364 sshutil.go:53] new ssh client: &{IP:localhost Port:50483 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/stopped-upgrade-853000/id_rsa Username:docker}
	I0307 10:15:07.591064    4364 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 10:15:07.592402    4364 info.go:137] Remote host: Buildroot 2021.02.12
	I0307 10:15:07.592409    4364 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18241-1349/.minikube/addons for local assets ...
	I0307 10:15:07.592492    4364 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18241-1349/.minikube/files for local assets ...
	I0307 10:15:07.592610    4364 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18241-1349/.minikube/files/etc/ssl/certs/17812.pem -> 17812.pem in /etc/ssl/certs
	I0307 10:15:07.592736    4364 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0307 10:15:07.595247    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/files/etc/ssl/certs/17812.pem --> /etc/ssl/certs/17812.pem (1708 bytes)
	I0307 10:15:07.601798    4364 start.go:296] duration metric: took 42.06775ms for postStartSetup
	I0307 10:15:07.601812    4364 fix.go:56] duration metric: took 20.775110917s for fixHost
	I0307 10:15:07.601845    4364 main.go:141] libmachine: Using SSH client type: native
	I0307 10:15:07.601986    4364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1003eda30] 0x1003f0290 <nil>  [] 0s} localhost 50483 <nil> <nil>}
	I0307 10:15:07.601991    4364 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0307 10:15:07.657847    4364 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709835307.753017337
	
	I0307 10:15:07.657855    4364 fix.go:216] guest clock: 1709835307.753017337
	I0307 10:15:07.657860    4364 fix.go:229] Guest: 2024-03-07 10:15:07.753017337 -0800 PST Remote: 2024-03-07 10:15:07.601813 -0800 PST m=+20.889528876 (delta=151.204337ms)
	I0307 10:15:07.657870    4364 fix.go:200] guest clock delta is within tolerance: 151.204337ms
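The clock check above runs `date +%s.%N` in the guest, parses the seconds.nanoseconds value, and compares it against the host clock; the 151.204337ms delta in the log is under the tolerance, so no resync is needed. A sketch of the arithmetic (assumed helper names, float parsing loses sub-microsecond precision, which is fine here):

package main

import (
	"fmt"
	"strconv"
	"time"
)

func clockDelta(guestSecNano string, host time.Time) (time.Duration, error) {
	f, err := strconv.ParseFloat(guestSecNano, 64)
	if err != nil {
		return 0, err
	}
	sec := int64(f)
	nsec := int64((f - float64(sec)) * 1e9)
	return time.Unix(sec, nsec).Sub(host), nil
}

func main() {
	host := time.Unix(1709835307, 601813000) // remote timestamp from the log
	d, _ := clockDelta("1709835307.753017337", host)
	fmt.Println(d, "within 1s tolerance:", d < time.Second && d > -time.Second)
}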
	I0307 10:15:07.657875    4364 start.go:83] releasing machines lock for "stopped-upgrade-853000", held for 20.83118425s
	I0307 10:15:07.657936    4364 ssh_runner.go:195] Run: cat /version.json
	I0307 10:15:07.657945    4364 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 10:15:07.657944    4364 sshutil.go:53] new ssh client: &{IP:localhost Port:50483 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/stopped-upgrade-853000/id_rsa Username:docker}
	I0307 10:15:07.657962    4364 sshutil.go:53] new ssh client: &{IP:localhost Port:50483 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/stopped-upgrade-853000/id_rsa Username:docker}
	W0307 10:15:07.658549    4364 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50483: connect: connection refused
	I0307 10:15:07.658574    4364 retry.go:31] will retry after 304.222176ms: dial tcp [::1]:50483: connect: connection refused
	W0307 10:15:08.004650    4364 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0307 10:15:08.004777    4364 ssh_runner.go:195] Run: systemctl --version
	I0307 10:15:08.008033    4364 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0307 10:15:08.010441    4364 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0307 10:15:08.010487    4364 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0307 10:15:08.014711    4364 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0307 10:15:08.021487    4364 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
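The two find/sed one-liners above rewrite any bridge or podman CNI conflist so its pod subnet becomes minikube's 10.244.0.0/16 (and the podman gateway 10.244.0.1). A Go sketch of the same rewrite on a conflist blob (hypothetical standalone helper, not minikube's implementation):

package main

import (
	"fmt"
	"regexp"
)

var subnetRe = regexp.MustCompile(`"subnet":\s*"[^"]*"`)

func forceSubnet(conflist []byte, cidr string) []byte {
	return subnetRe.ReplaceAll(conflist, []byte(fmt.Sprintf(`"subnet": %q`, cidr)))
}

func main() {
	in := []byte(`{"ipam": {"ranges": [[{"subnet": "10.88.0.0/16"}]]}}`)
	fmt.Println(string(forceSubnet(in, "10.244.0.0/16")))
}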
	I0307 10:15:08.021498    4364 start.go:494] detecting cgroup driver to use...
	I0307 10:15:08.021598    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 10:15:08.030567    4364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0307 10:15:08.033880    4364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 10:15:08.039583    4364 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 10:15:08.039641    4364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 10:15:08.044164    4364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 10:15:08.051423    4364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 10:15:08.054547    4364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 10:15:08.057602    4364 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 10:15:08.060753    4364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 10:15:08.063775    4364 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 10:15:08.066146    4364 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 10:15:08.069096    4364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:15:08.130906    4364 ssh_runner.go:195] Run: sudo systemctl restart containerd
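This block points crictl at containerd, forces containerd onto the cgroupfs driver (SystemdCgroup = false) and the runc v2 runtime, then reloads and restarts it. A sketch of the crictl.yaml write that the `printf ... | sudo tee` step performs, as a hypothetical local helper:

package main

import (
	"fmt"
	"os"
)

func writeCrictlConfig(path, endpoint string) error {
	// Same one-key YAML the log shows being written to /etc/crictl.yaml.
	data := fmt.Sprintf("runtime-endpoint: %s\n", endpoint)
	return os.WriteFile(path, []byte(data), 0644)
}

func main() {
	if err := writeCrictlConfig("crictl.yaml", "unix:///run/containerd/containerd.sock"); err != nil {
		fmt.Println(err)
	}
}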
	I0307 10:15:08.139456    4364 start.go:494] detecting cgroup driver to use...
	I0307 10:15:08.139527    4364 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0307 10:15:08.145635    4364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 10:15:08.157235    4364 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0307 10:15:08.164517    4364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 10:15:08.169015    4364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 10:15:08.173590    4364 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0307 10:15:08.237141    4364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 10:15:08.243347    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 10:15:08.249341    4364 ssh_runner.go:195] Run: which cri-dockerd
	I0307 10:15:08.250673    4364 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0307 10:15:08.253714    4364 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0307 10:15:08.258585    4364 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0307 10:15:08.318628    4364 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0307 10:15:08.394475    4364 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0307 10:15:08.394540    4364 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0307 10:15:08.399450    4364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:15:08.459373    4364 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 10:15:09.601996    4364 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.142643125s)
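The "configuring docker to use cgroupfs" step scp's a small /etc/docker/daemon.json (130 bytes) from memory; the log does not show its content, but a representative payload selecting the cgroupfs driver (an assumption, built with encoding/json) would look like:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Assumed content: the key that actually switches Docker's cgroup driver.
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b))
}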
	I0307 10:15:09.602053    4364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0307 10:15:09.606778    4364 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0307 10:15:09.612854    4364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 10:15:09.617446    4364 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0307 10:15:09.678724    4364 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 10:15:09.739862    4364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:15:09.803628    4364 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0307 10:15:09.809117    4364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 10:15:09.813954    4364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:15:09.877283    4364 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0307 10:15:09.916306    4364 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0307 10:15:09.916380    4364 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0307 10:15:09.918362    4364 start.go:562] Will wait 60s for crictl version
	I0307 10:15:09.918402    4364 ssh_runner.go:195] Run: which crictl
	I0307 10:15:09.919863    4364 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 10:15:09.935827    4364 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0307 10:15:09.935907    4364 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 10:15:09.952987    4364 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 10:15:09.972454    4364 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0307 10:15:09.972528    4364 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0307 10:15:09.973735    4364 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
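The bash one-liner above is an idempotent hosts-file upsert: strip any existing line ending in the hostname, append the fresh mapping, then copy the temp file back over /etc/hosts. A pure-Go sketch of the same transformation (hypothetical helper operating on the file contents):

package main

import (
	"fmt"
	"strings"
)

func upsertHost(hosts, ip, name string) string {
	var keep []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) { // drop stale entries for this name
			keep = append(keep, line)
		}
	}
	keep = append(keep, ip+"\t"+name)
	return strings.Join(keep, "\n") + "\n"
}

func main() {
	fmt.Print(upsertHost("127.0.0.1\tlocalhost\n", "10.0.2.2", "host.minikube.internal"))
}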
	I0307 10:15:09.977614    4364 kubeadm.go:877] updating cluster {Name:stopped-upgrade-853000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50517 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-853000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0307 10:15:09.977681    4364 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0307 10:15:09.977721    4364 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 10:15:09.988739    4364 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0307 10:15:09.988747    4364 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0307 10:15:09.988794    4364 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0307 10:15:09.992154    4364 ssh_runner.go:195] Run: which lz4
	I0307 10:15:09.993457    4364 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0307 10:15:09.994714    4364 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0307 10:15:09.994723    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0307 10:15:10.692895    4364 docker.go:649] duration metric: took 699.494416ms to copy over tarball
	I0307 10:15:10.692960    4364 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0307 10:15:11.989280    4364 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.296346417s)
	I0307 10:15:11.989295    4364 ssh_runner.go:146] rm: /preloaded.tar.lz4
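The preload fast path above first stats /preloaded.tar.lz4 on the guest, copies the cached tarball over only because it is missing, unpacks it into /var with xattrs preserved, and then deletes it. A sketch of the check-then-extract step as a hypothetical local helper using the same tar invocation as the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func extractPreload(tarball, dest string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload missing, would copy it over first: %w", err)
	}
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include",
		"security.capability", "-I", "lz4", "-C", dest, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}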
	I0307 10:15:12.007849    4364 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0307 10:15:12.010719    4364 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0307 10:15:12.016041    4364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:15:12.085388    4364 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 10:15:13.674852    4364 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.589501042s)
	I0307 10:15:13.674951    4364 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 10:15:13.689572    4364 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0307 10:15:13.689580    4364 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0307 10:15:13.689586    4364 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0307 10:15:13.695935    4364 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0307 10:15:13.695948    4364 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 10:15:13.696042    4364 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0307 10:15:13.696083    4364 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0307 10:15:13.696135    4364 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0307 10:15:13.696162    4364 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 10:15:13.696213    4364 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0307 10:15:13.696265    4364 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0307 10:15:13.705980    4364 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0307 10:15:13.706156    4364 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0307 10:15:13.706711    4364 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 10:15:13.706855    4364 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0307 10:15:13.706854    4364 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 10:15:13.706888    4364 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0307 10:15:13.706913    4364 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0307 10:15:13.706921    4364 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0307 10:15:15.624169    4364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0307 10:15:15.639268    4364 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0307 10:15:15.639303    4364 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0307 10:15:15.639361    4364 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0307 10:15:15.650942    4364 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0307 10:15:15.651048    4364 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0307 10:15:15.653529    4364 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0307 10:15:15.653542    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0307 10:15:15.661490    4364 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0307 10:15:15.661499    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0307 10:15:15.687877    4364 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
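Each cache transfer follows the same pattern shown above for pause:3.7: remove the wrong-hash image, scp the cached tarball to /var/lib/minikube/images, then pipe it into `docker load`. A sketch of the load step using docker's -i flag (hypothetical equivalent of the `sudo cat ... | docker load` pipeline):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func loadImage(tarball string) error {
	// `docker load -i` reads the image tarball directly instead of via a pipe.
	cmd := exec.Command("docker", "load", "-i", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := loadImage("/var/lib/minikube/images/pause_3.7"); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
}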
	I0307 10:15:15.728016    4364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 10:15:15.739780    4364 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0307 10:15:15.739801    4364 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 10:15:15.739856    4364 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0307 10:15:15.749666    4364 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0307 10:15:15.769876    4364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0307 10:15:15.775193    4364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	W0307 10:15:15.775223    4364 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0307 10:15:15.775301    4364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0307 10:15:15.782479    4364 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0307 10:15:15.782500    4364 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0307 10:15:15.782559    4364 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0307 10:15:15.792964    4364 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0307 10:15:15.792984    4364 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0307 10:15:15.792991    4364 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0307 10:15:15.793004    4364 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0307 10:15:15.793042    4364 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0307 10:15:15.793042    4364 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0307 10:15:15.795708    4364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0307 10:15:15.797890    4364 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0307 10:15:15.809316    4364 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0307 10:15:15.809332    4364 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0307 10:15:15.809430    4364 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0307 10:15:15.817864    4364 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0307 10:15:15.817882    4364 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0307 10:15:15.817931    4364 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0307 10:15:15.818066    4364 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0307 10:15:15.818084    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0307 10:15:15.833582    4364 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0307 10:15:15.838075    4364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0307 10:15:15.859984    4364 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0307 10:15:15.860003    4364 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0307 10:15:15.860057    4364 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0307 10:15:15.870430    4364 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0307 10:15:15.870445    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0307 10:15:15.876006    4364 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0307 10:15:15.912017    4364 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0307 10:15:16.549066    4364 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0307 10:15:16.549587    4364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 10:15:16.583975    4364 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0307 10:15:16.584016    4364 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 10:15:16.584116    4364 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 10:15:16.609037    4364 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0307 10:15:16.609181    4364 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0307 10:15:16.611080    4364 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0307 10:15:16.611093    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0307 10:15:16.640464    4364 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0307 10:15:16.640477    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0307 10:15:16.881319    4364 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0307 10:15:16.881357    4364 cache_images.go:92] duration metric: took 3.191868375s to LoadCachedImages
	W0307 10:15:16.881406    4364 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	I0307 10:15:16.881412    4364 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0307 10:15:16.881470    4364 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-853000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-853000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0307 10:15:16.881545    4364 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0307 10:15:16.895128    4364 cni.go:84] Creating CNI manager for ""
	I0307 10:15:16.895140    4364 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 10:15:16.895145    4364 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0307 10:15:16.895153    4364 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-853000 NodeName:stopped-upgrade-853000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0307 10:15:16.895234    4364 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-853000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
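The kubeadm config above is rendered from the kubeadm options logged before it; cluster-specific values (node name, node IP, CRI socket) are substituted into a fixed skeleton. A minimal text/template sketch of that substitution for the nodeRegistration stanza (assumed field names, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

const stanza = `nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	t := template.Must(template.New("node").Parse(stanza))
	t.Execute(os.Stdout, map[string]string{
		"CRISocket": "unix:///var/run/cri-dockerd.sock",
		"NodeName":  "stopped-upgrade-853000",
		"NodeIP":    "10.0.2.15",
	})
}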
	
	I0307 10:15:16.895290    4364 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0307 10:15:16.898010    4364 binaries.go:44] Found k8s binaries, skipping transfer
	I0307 10:15:16.898037    4364 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0307 10:15:16.901009    4364 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0307 10:15:16.906296    4364 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 10:15:16.910971    4364 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0307 10:15:16.916356    4364 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0307 10:15:16.917622    4364 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 10:15:16.921090    4364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:15:16.986619    4364 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 10:15:16.992939    4364 certs.go:68] Setting up /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000 for IP: 10.0.2.15
	I0307 10:15:16.992946    4364 certs.go:194] generating shared ca certs ...
	I0307 10:15:16.992955    4364 certs.go:226] acquiring lock for ca certs: {Name:mkc8d76d77d4efc8795fd6159d984855be90a666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:15:16.993114    4364 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18241-1349/.minikube/ca.key
	I0307 10:15:16.993885    4364 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18241-1349/.minikube/proxy-client-ca.key
	I0307 10:15:16.993891    4364 certs.go:256] generating profile certs ...
	I0307 10:15:16.994253    4364 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/client.key
	I0307 10:15:16.994275    4364 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/apiserver.key.be93b4ff
	I0307 10:15:16.994287    4364 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/apiserver.crt.be93b4ff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0307 10:15:17.061845    4364 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/apiserver.crt.be93b4ff ...
	I0307 10:15:17.061859    4364 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/apiserver.crt.be93b4ff: {Name:mk58f658068efa81789e4ab6ce5c845d22fe52f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:15:17.062177    4364 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/apiserver.key.be93b4ff ...
	I0307 10:15:17.062182    4364 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/apiserver.key.be93b4ff: {Name:mkc44d8cd384eb86a1dd6639cb29bb73d981af5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:15:17.062334    4364 certs.go:381] copying /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/apiserver.crt.be93b4ff -> /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/apiserver.crt
	I0307 10:15:17.062459    4364 certs.go:385] copying /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/apiserver.key.be93b4ff -> /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/apiserver.key
	I0307 10:15:17.062735    4364 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/proxy-client.key
	I0307 10:15:17.062950    4364 certs.go:484] found cert: /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/1781.pem (1338 bytes)
	W0307 10:15:17.063147    4364 certs.go:480] ignoring /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/1781_empty.pem, impossibly tiny 0 bytes
	I0307 10:15:17.063156    4364 certs.go:484] found cert: /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca-key.pem (1675 bytes)
	I0307 10:15:17.063174    4364 certs.go:484] found cert: /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem (1078 bytes)
	I0307 10:15:17.063193    4364 certs.go:484] found cert: /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem (1123 bytes)
	I0307 10:15:17.063210    4364 certs.go:484] found cert: /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/key.pem (1679 bytes)
	I0307 10:15:17.063250    4364 certs.go:484] found cert: /Users/jenkins/minikube-integration/18241-1349/.minikube/files/etc/ssl/certs/17812.pem (1708 bytes)
	I0307 10:15:17.063550    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 10:15:17.070258    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0307 10:15:17.077294    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 10:15:17.084572    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0307 10:15:17.091053    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0307 10:15:17.097542    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0307 10:15:17.104767    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0307 10:15:17.112083    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0307 10:15:17.118929    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/1781.pem --> /usr/share/ca-certificates/1781.pem (1338 bytes)
	I0307 10:15:17.125269    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/files/etc/ssl/certs/17812.pem --> /usr/share/ca-certificates/17812.pem (1708 bytes)
	I0307 10:15:17.132440    4364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18241-1349/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 10:15:17.139354    4364 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0307 10:15:17.144361    4364 ssh_runner.go:195] Run: openssl version
	I0307 10:15:17.146651    4364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17812.pem && ln -fs /usr/share/ca-certificates/17812.pem /etc/ssl/certs/17812.pem"
	I0307 10:15:17.149715    4364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17812.pem
	I0307 10:15:17.151157    4364 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  7 17:37 /usr/share/ca-certificates/17812.pem
	I0307 10:15:17.151180    4364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17812.pem
	I0307 10:15:17.152947    4364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17812.pem /etc/ssl/certs/3ec20f2e.0"
	I0307 10:15:17.156362    4364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 10:15:17.159454    4364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 10:15:17.160895    4364 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 17:31 /usr/share/ca-certificates/minikubeCA.pem
	I0307 10:15:17.160920    4364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 10:15:17.162904    4364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0307 10:15:17.165739    4364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1781.pem && ln -fs /usr/share/ca-certificates/1781.pem /etc/ssl/certs/1781.pem"
	I0307 10:15:17.169000    4364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1781.pem
	I0307 10:15:17.170479    4364 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  7 17:37 /usr/share/ca-certificates/1781.pem
	I0307 10:15:17.170499    4364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1781.pem
	I0307 10:15:17.172203    4364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1781.pem /etc/ssl/certs/51391683.0"
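The openssl/ln sequence above follows OpenSSL's lookup convention: a CA is found by a symlink named after its subject hash (e.g. b5213941.0 for minikubeCA.pem), so each installed PEM gets a <hash>.0 link in /etc/ssl/certs. A sketch of that step shelling out to openssl, as the log does (hypothetical helper):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func linkByHash(pem, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := certsDir + "/" + strings.TrimSpace(string(out)) + ".0"
	_ = os.Remove(link) // refresh a stale link, mirroring ln -fs
	return os.Symlink(pem, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}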
	I0307 10:15:17.175167    4364 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0307 10:15:17.176663    4364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0307 10:15:17.179253    4364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0307 10:15:17.181298    4364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0307 10:15:17.183414    4364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0307 10:15:17.185198    4364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0307 10:15:17.187314    4364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0307 10:15:17.189141    4364 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-853000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50517 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-853000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0307 10:15:17.189214    4364 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0307 10:15:17.199153    4364 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0307 10:15:17.202232    4364 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0307 10:15:17.202241    4364 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0307 10:15:17.202244    4364 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0307 10:15:17.202271    4364 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0307 10:15:17.205055    4364 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:15:17.205450    4364 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-853000" does not appear in /Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:15:17.205562    4364 kubeconfig.go:62] /Users/jenkins/minikube-integration/18241-1349/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-853000" cluster setting kubeconfig missing "stopped-upgrade-853000" context setting]
	I0307 10:15:17.205765    4364 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/kubeconfig: {Name:mkeef9e7922e618c2ac8219607b646aeaf5f61cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:15:17.206204    4364 kapi.go:59] client config for stopped-upgrade-853000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/client.key", CAFile:"/Users/jenkins/minikube-integration/18241-1349/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1016e36a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 10:15:17.206657    4364 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0307 10:15:17.209205    4364 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-853000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0307 10:15:17.209212    4364 kubeadm.go:1153] stopping kube-system containers ...
	I0307 10:15:17.209248    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0307 10:15:17.223357    4364 docker.go:483] Stopping containers: [8c3d27435da1 84153db23698 2ed248da88ff 5b727911a818 02be06ae053e a9aa000cac5c 1390d083217d d14118a56b8e]
	I0307 10:15:17.223422    4364 ssh_runner.go:195] Run: docker stop 8c3d27435da1 84153db23698 2ed248da88ff 5b727911a818 02be06ae053e a9aa000cac5c 1390d083217d d14118a56b8e
	I0307 10:15:17.234131    4364 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0307 10:15:17.239907    4364 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 10:15:17.242546    4364 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 10:15:17.242551    4364 kubeadm.go:156] found existing configuration files:
	
	I0307 10:15:17.242572    4364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/admin.conf
	I0307 10:15:17.245272    4364 kubeadm.go:162] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0307 10:15:17.245306    4364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0307 10:15:17.248219    4364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/kubelet.conf
	I0307 10:15:17.250655    4364 kubeadm.go:162] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0307 10:15:17.250682    4364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0307 10:15:17.253302    4364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/controller-manager.conf
	I0307 10:15:17.256271    4364 kubeadm.go:162] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0307 10:15:17.256292    4364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0307 10:15:17.258879    4364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/scheduler.conf
	I0307 10:15:17.261380    4364 kubeadm.go:162] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0307 10:15:17.261402    4364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0307 10:15:17.264304    4364 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 10:15:17.267215    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 10:15:17.293374    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 10:15:17.764070    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0307 10:15:17.872508    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 10:15:17.897176    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
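Because existing configuration files were found, the restart path replays individual kubeadm init phases against the regenerated config instead of running a full `kubeadm init`. A sketch of that phase sequence as a hypothetical runner (same phases and config path as the log, without the sudo/env wrapping):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s", p, err, out)
			return
		}
	}
}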
	I0307 10:15:17.930137    4364 api_server.go:52] waiting for apiserver process to appear ...
	I0307 10:15:17.930229    4364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 10:15:18.432316    4364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 10:15:18.932244    4364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 10:15:18.936356    4364 api_server.go:72] duration metric: took 1.006253958s to wait for apiserver process to appear ...
	I0307 10:15:18.936364    4364 api_server.go:88] waiting for apiserver healthz status ...
	I0307 10:15:18.936373    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:15:23.937843    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:15:23.937871    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:15:28.938039    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:15:28.938085    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:15:33.938235    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:15:33.938265    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:15:38.938948    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:15:38.938991    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:15:43.939392    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:15:43.939438    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:15:48.940099    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:15:48.940125    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:15:53.940879    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:15:53.940904    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:15:58.941872    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:15:58.941895    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:16:03.943118    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:03.943141    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:16:08.944736    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:08.944764    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:16:13.946769    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:13.946794    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:16:18.948922    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
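[Annotation] Every healthz probe above fails the same way: the HTTP client gives up after five seconds (the Client.Timeout in the Get error), the checker logs "stopped", and it immediately re-probes, so attempts land roughly 5 s apart. A minimal shell reproduction of that probe against the same endpoint (curl stands in for minikube's Go HTTP client; the 5 s budget comes from the log, the rest is illustrative):

    # Probe the apiserver healthz endpoint, giving each attempt 5 seconds.
    # -k skips TLS verification, since the probe targets the node IP directly;
    # -f makes curl fail on HTTP errors, -s/-S keep output quiet but show errors.
    until curl -ksfS --max-time 5 https://10.0.2.15:8443/healthz; do
      echo "healthz not ready, retrying"
    done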
	I0307 10:16:18.949230    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:16:18.986383    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:16:18.986536    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:16:19.008427    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:16:19.008545    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:16:19.023726    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:16:19.023810    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:16:19.036326    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:16:19.036414    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:16:19.047190    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:16:19.047263    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:16:19.057726    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:16:19.057794    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:16:19.067587    4364 logs.go:276] 0 containers: []
	W0307 10:16:19.067605    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:16:19.067670    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:16:19.077602    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:16:19.077632    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:16:19.077640    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:16:19.218881    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:16:19.218892    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:16:19.246630    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:16:19.246646    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:16:19.260301    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:16:19.260311    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:16:19.272050    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:16:19.272062    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:16:19.283767    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:16:19.283778    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:16:19.310869    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:16:19.310880    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:16:19.326453    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:16:19.326463    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:16:19.337871    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:16:19.337881    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:16:19.350016    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:16:19.350029    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:16:19.371088    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:16:19.371099    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:16:19.410369    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:16:19.410378    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:16:19.421265    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:16:19.421278    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:16:19.434989    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:16:19.434999    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:16:19.439632    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:16:19.439640    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:16:19.455162    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:16:19.455173    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:16:19.470547    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:16:19.470567    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
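[Annotation] The diagnostic pass that follows each failed probe is the same recipe for every control-plane component: list containers whose name matches k8s_<component>, then tail the last 400 lines of each hit. Both commands appear verbatim above, so a single step can be reproduced by hand on the node, e.g. for the apiserver (the container ID is specific to this run):

    # Find kube-apiserver containers the way logs.go does, then tail one.
    docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
    docker logs --tail 400 9315e04db43f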
	I0307 10:16:21.990616    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:16:26.991423    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:26.991575    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:16:27.006556    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:16:27.006644    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:16:27.019894    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:16:27.019976    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:16:27.030690    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:16:27.030764    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:16:27.041297    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:16:27.041369    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:16:27.051895    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:16:27.051963    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:16:27.063008    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:16:27.063083    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:16:27.073154    4364 logs.go:276] 0 containers: []
	W0307 10:16:27.073165    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:16:27.073223    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:16:27.083641    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:16:27.083659    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:16:27.083665    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:16:27.095758    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:16:27.095770    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:16:27.108609    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:16:27.108621    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:16:27.122787    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:16:27.122797    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:16:27.136828    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:16:27.136838    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:16:27.162507    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:16:27.162522    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:16:27.204510    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:16:27.204524    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:16:27.220206    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:16:27.220219    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:16:27.231750    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:16:27.231764    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:16:27.243632    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:16:27.243642    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:16:27.260530    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:16:27.260547    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:16:27.276014    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:16:27.276031    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:16:27.293584    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:16:27.293594    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:16:27.333874    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:16:27.333888    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:16:27.359014    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:16:27.359026    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:16:27.373339    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:16:27.373356    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:16:27.378284    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:16:27.378294    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:16:29.895859    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:16:34.898111    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:34.898321    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:16:34.924040    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:16:34.924163    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:16:34.945763    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:16:34.945837    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:16:34.958685    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:16:34.958746    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:16:34.970252    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:16:34.970327    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:16:34.981196    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:16:34.981263    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:16:34.991388    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:16:34.991447    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:16:35.004721    4364 logs.go:276] 0 containers: []
	W0307 10:16:35.004733    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:16:35.004793    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:16:35.014837    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:16:35.014860    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:16:35.014865    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:16:35.040218    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:16:35.040227    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:16:35.052385    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:16:35.052396    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:16:35.066507    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:16:35.066518    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:16:35.081773    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:16:35.081783    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:16:35.095549    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:16:35.095563    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:16:35.113502    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:16:35.113516    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:16:35.127340    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:16:35.127350    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:16:35.138913    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:16:35.138923    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:16:35.149816    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:16:35.149827    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:16:35.188061    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:16:35.188070    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:16:35.192097    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:16:35.192107    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:16:35.225890    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:16:35.225901    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:16:35.252398    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:16:35.252409    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:16:35.270339    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:16:35.270349    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:16:35.282271    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:16:35.282286    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:16:35.295948    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:16:35.295959    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:16:37.817773    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:16:42.819926    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:42.820141    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:16:42.838308    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:16:42.838411    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:16:42.851946    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:16:42.852042    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:16:42.866571    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:16:42.866644    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:16:42.878287    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:16:42.878351    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:16:42.888706    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:16:42.888771    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:16:42.899570    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:16:42.899642    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:16:42.910142    4364 logs.go:276] 0 containers: []
	W0307 10:16:42.910154    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:16:42.910227    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:16:42.920496    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:16:42.920511    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:16:42.920516    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:16:42.934526    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:16:42.934537    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:16:42.950192    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:16:42.950201    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:16:42.964770    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:16:42.964784    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:16:42.975837    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:16:42.975849    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:16:42.980543    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:16:42.980550    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:16:43.005519    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:16:43.005530    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:16:43.017133    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:16:43.017144    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:16:43.028940    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:16:43.028951    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:16:43.052741    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:16:43.052757    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:16:43.092020    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:16:43.092033    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:16:43.130279    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:16:43.130293    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:16:43.143936    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:16:43.143947    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:16:43.157089    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:16:43.157099    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:16:43.170799    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:16:43.170812    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:16:43.182322    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:16:43.182335    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:16:43.199570    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:16:43.199582    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
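[Annotation] The "container status" step uses an inline fallback: it tries crictl at whatever path `which` reports, and if that whole command fails (including when crictl is absent) it falls back to `docker ps -a`. The same logic written out long-hand, equivalent in the common cases and expanded only for readability:

    # Prefer crictl when it is installed; otherwise list containers via docker.
    if command -v crictl >/dev/null 2>&1; then
      sudo "$(command -v crictl)" ps -a
    else
      sudo docker ps -a
    fi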
	I0307 10:16:45.716892    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:16:50.719097    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:50.719313    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:16:50.743109    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:16:50.743214    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:16:50.756801    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:16:50.756877    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:16:50.768737    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:16:50.768810    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:16:50.778576    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:16:50.778651    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:16:50.788864    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:16:50.788931    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:16:50.799141    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:16:50.799212    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:16:50.809463    4364 logs.go:276] 0 containers: []
	W0307 10:16:50.809476    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:16:50.809547    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:16:50.820431    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:16:50.820448    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:16:50.820454    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:16:50.856432    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:16:50.856446    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:16:50.870437    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:16:50.870447    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:16:50.884914    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:16:50.884924    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:16:50.895943    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:16:50.895955    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:16:50.921876    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:16:50.921889    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:16:50.960419    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:16:50.960427    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:16:50.975802    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:16:50.975815    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:16:50.998226    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:16:50.998237    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:16:51.017442    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:16:51.017452    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:16:51.036157    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:16:51.036167    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:16:51.040197    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:16:51.040210    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:16:51.052625    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:16:51.052635    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:16:51.063694    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:16:51.063704    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:16:51.079558    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:16:51.079570    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:16:51.091016    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:16:51.091029    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:16:51.104305    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:16:51.104315    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:16:53.630733    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:16:58.632964    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:16:58.633208    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:16:58.655254    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:16:58.655362    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:16:58.670653    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:16:58.670737    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:16:58.683195    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:16:58.683266    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:16:58.694082    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:16:58.694152    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:16:58.704863    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:16:58.704939    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:16:58.715316    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:16:58.715389    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:16:58.727171    4364 logs.go:276] 0 containers: []
	W0307 10:16:58.727185    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:16:58.727257    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:16:58.738088    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:16:58.738111    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:16:58.738118    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:16:58.778084    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:16:58.778097    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:16:58.790275    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:16:58.790291    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:16:58.805141    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:16:58.805152    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:16:58.817145    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:16:58.817155    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:16:58.832704    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:16:58.832718    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:16:58.837003    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:16:58.837012    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:16:58.854921    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:16:58.854931    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:16:58.879773    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:16:58.879787    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:16:58.893460    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:16:58.893473    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:16:58.908295    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:16:58.908306    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:16:58.925463    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:16:58.925473    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:16:58.966533    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:16:58.966549    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:16:58.980276    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:16:58.980290    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:16:58.991495    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:16:58.991507    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:16:59.015572    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:16:59.015588    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:16:59.027416    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:16:59.027429    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:17:01.541310    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:17:06.543476    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:17:06.543873    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:17:06.579102    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:17:06.579247    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:17:06.600009    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:17:06.600101    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:17:06.614597    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:17:06.614687    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:17:06.627043    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:17:06.627115    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:17:06.637332    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:17:06.637405    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:17:06.651109    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:17:06.651185    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:17:06.661487    4364 logs.go:276] 0 containers: []
	W0307 10:17:06.661497    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:17:06.661548    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:17:06.679773    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:17:06.679792    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:17:06.679799    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:17:06.685890    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:17:06.685899    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:17:06.703955    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:17:06.703967    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:17:06.715302    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:17:06.715313    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:17:06.754894    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:17:06.754904    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:17:06.770507    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:17:06.770517    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:17:06.803636    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:17:06.803653    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:17:06.815144    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:17:06.815158    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:17:06.826936    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:17:06.826948    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:17:06.861133    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:17:06.861147    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:17:06.875858    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:17:06.875869    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:17:06.889645    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:17:06.889661    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:17:06.914983    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:17:06.914992    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:17:06.926354    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:17:06.926365    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:17:06.947490    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:17:06.947501    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:17:06.958652    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:17:06.958662    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:17:06.973453    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:17:06.973464    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:17:09.486709    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:17:14.488978    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:17:14.489162    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:17:14.507417    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:17:14.507518    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:17:14.524879    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:17:14.524955    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:17:14.537329    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:17:14.537396    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:17:14.547829    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:17:14.547900    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:17:14.558322    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:17:14.558388    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:17:14.568733    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:17:14.568801    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:17:14.580384    4364 logs.go:276] 0 containers: []
	W0307 10:17:14.580396    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:17:14.580454    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:17:14.590913    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:17:14.590930    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:17:14.590936    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:17:14.629061    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:17:14.629075    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:17:14.643092    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:17:14.643102    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:17:14.655511    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:17:14.655520    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:17:14.674585    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:17:14.674600    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:17:14.688294    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:17:14.688304    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:17:14.711566    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:17:14.711577    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:17:14.723401    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:17:14.723411    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:17:14.737232    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:17:14.737241    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:17:14.749296    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:17:14.749306    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:17:14.761031    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:17:14.761042    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:17:14.798912    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:17:14.798925    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:17:14.803309    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:17:14.803316    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:17:14.828531    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:17:14.828542    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:17:14.845674    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:17:14.845686    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:17:14.856616    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:17:14.856628    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:17:14.871388    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:17:14.871399    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:17:17.384156    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:17:22.386452    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:17:22.386609    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:17:22.398625    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:17:22.398705    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:17:22.408942    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:17:22.409014    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:17:22.419300    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:17:22.419369    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:17:22.431392    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:17:22.431469    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:17:22.445748    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:17:22.445821    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:17:22.456044    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:17:22.456111    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:17:22.466190    4364 logs.go:276] 0 containers: []
	W0307 10:17:22.466202    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:17:22.466261    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:17:22.476854    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:17:22.476875    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:17:22.476881    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:17:22.488165    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:17:22.488178    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:17:22.502448    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:17:22.502458    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:17:22.517107    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:17:22.517119    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:17:22.534778    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:17:22.534788    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:17:22.548267    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:17:22.548281    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:17:22.559596    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:17:22.559606    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:17:22.572303    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:17:22.572315    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:17:22.590942    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:17:22.590956    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:17:22.609164    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:17:22.609173    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:17:22.633736    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:17:22.633752    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:17:22.637763    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:17:22.637769    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:17:22.648834    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:17:22.648845    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:17:22.674115    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:17:22.674135    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:17:22.689748    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:17:22.689759    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:17:22.708909    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:17:22.708921    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:17:22.746135    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:17:22.746149    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:17:25.283212    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:17:30.285367    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:17:30.285607    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:17:30.309256    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:17:30.309354    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:17:30.323594    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:17:30.323675    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:17:30.335860    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:17:30.335928    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:17:30.346314    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:17:30.346387    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:17:30.356809    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:17:30.356883    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:17:30.371867    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:17:30.371933    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:17:30.381822    4364 logs.go:276] 0 containers: []
	W0307 10:17:30.381834    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:17:30.381889    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:17:30.392655    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:17:30.392673    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:17:30.392680    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:17:30.427306    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:17:30.427317    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:17:30.438493    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:17:30.438505    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:17:30.454326    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:17:30.454338    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:17:30.466296    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:17:30.466306    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:17:30.487557    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:17:30.487567    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:17:30.500592    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:17:30.500608    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:17:30.513188    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:17:30.513198    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:17:30.530043    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:17:30.530053    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:17:30.541402    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:17:30.541413    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:17:30.579227    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:17:30.579235    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:17:30.593171    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:17:30.593186    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:17:30.607037    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:17:30.607047    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:17:30.630448    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:17:30.630455    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:17:30.634956    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:17:30.634963    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:17:30.660369    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:17:30.660379    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:17:30.673996    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:17:30.674007    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:17:33.186988    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:17:38.188737    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:17:38.188980    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:17:38.215822    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:17:38.215914    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:17:38.230480    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:17:38.230559    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:17:38.242218    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:17:38.242293    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:17:38.253364    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:17:38.253435    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:17:38.264369    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:17:38.264439    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:17:38.274963    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:17:38.275029    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:17:38.285404    4364 logs.go:276] 0 containers: []
	W0307 10:17:38.285414    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:17:38.285474    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:17:38.295351    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:17:38.295367    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:17:38.295372    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:17:38.311450    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:17:38.311461    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:17:38.325865    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:17:38.325876    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:17:38.346572    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:17:38.346583    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:17:38.358609    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:17:38.358621    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:17:38.373318    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:17:38.373329    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:17:38.388777    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:17:38.388788    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:17:38.400496    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:17:38.400508    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:17:38.412100    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:17:38.412112    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:17:38.423648    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:17:38.423660    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:17:38.427987    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:17:38.427995    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:17:38.452966    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:17:38.452976    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:17:38.467520    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:17:38.467531    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:17:38.478801    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:17:38.478812    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:17:38.489504    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:17:38.489520    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:17:38.514635    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:17:38.514643    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:17:38.552633    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:17:38.552643    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
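	Each iteration above follows the same pattern: the runner polls https://10.0.2.15:8443/healthz, the request times out after roughly 5 seconds (10:17:33 -> 10:17:38), and the last 400 lines of every control-plane container plus kubelet, dmesg, and Docker are collected before the next attempt. A minimal Go sketch of that loop, assuming a hypothetical gatherLogs helper (illustrative only, not minikube's actual api_server.go / logs.go code):

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    // gatherLogs stands in for the per-component collection seen above:
	    // kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy,
	    // kube-controller-manager, storage-provisioner, kubelet, dmesg, Docker.
	    func gatherLogs() {}

	    func main() {
	        url := "https://10.0.2.15:8443/healthz"
	        client := &http.Client{
	            Timeout: 5 * time.Second, // matches the ~5s gap before each "stopped:" line
	            Transport: &http.Transport{
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        // Overall wait window; the real value used by the test is an assumption here.
	        deadline := time.Now().Add(4 * time.Minute)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get(url)
	            if err == nil {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    fmt.Println("apiserver healthy")
	                    return
	                }
	            }
	            fmt.Printf("stopped: %s: %v\n", url, err)
	            gatherLogs()
	            time.Sleep(2500 * time.Millisecond) // short pause before the next healthz probe
	        }
	        fmt.Println("apiserver never reported healthy before the deadline")
	    }

	In this run the loop never succeeds: every probe ends with "context deadline exceeded (Client.Timeout exceeded while awaiting headers)", which is the client-side timeout firing, not an HTTP error from the apiserver.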
	I0307 10:17:41.090614    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:17:46.092774    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:17:46.093058    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:17:46.121677    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:17:46.121797    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:17:46.139209    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:17:46.139308    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:17:46.152366    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:17:46.152441    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:17:46.164453    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:17:46.164524    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:17:46.174520    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:17:46.174593    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:17:46.188909    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:17:46.188982    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:17:46.199989    4364 logs.go:276] 0 containers: []
	W0307 10:17:46.200005    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:17:46.200068    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:17:46.210775    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:17:46.210796    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:17:46.210801    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:17:46.234673    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:17:46.234683    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:17:46.253561    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:17:46.253571    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:17:46.267472    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:17:46.267482    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:17:46.278632    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:17:46.278643    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:17:46.294284    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:17:46.294296    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:17:46.310802    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:17:46.310815    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:17:46.322196    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:17:46.322206    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:17:46.359277    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:17:46.359288    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:17:46.376726    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:17:46.376737    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:17:46.388473    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:17:46.388484    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:17:46.402185    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:17:46.402195    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:17:46.413370    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:17:46.413381    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:17:46.451625    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:17:46.451634    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:17:46.455785    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:17:46.455791    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:17:46.480481    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:17:46.480491    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:17:46.495444    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:17:46.495457    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:17:49.011502    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:17:54.013717    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:17:54.013981    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:17:54.034361    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:17:54.034457    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:17:54.048766    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:17:54.048850    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:17:54.063886    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:17:54.063952    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:17:54.074798    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:17:54.074890    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:17:54.085745    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:17:54.085812    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:17:54.096662    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:17:54.096738    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:17:54.106490    4364 logs.go:276] 0 containers: []
	W0307 10:17:54.106501    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:17:54.106562    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:17:54.117021    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:17:54.117045    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:17:54.117051    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:17:54.131508    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:17:54.131519    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:17:54.142260    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:17:54.142272    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:17:54.156859    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:17:54.156870    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:17:54.168965    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:17:54.168975    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:17:54.180150    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:17:54.180162    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:17:54.218877    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:17:54.218886    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:17:54.254586    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:17:54.254596    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:17:54.280159    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:17:54.280171    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:17:54.294409    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:17:54.294420    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:17:54.309612    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:17:54.309626    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:17:54.333223    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:17:54.333231    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:17:54.344767    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:17:54.344777    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:17:54.362190    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:17:54.362201    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:17:54.374055    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:17:54.374066    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:17:54.378289    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:17:54.378297    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:17:54.392045    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:17:54.392056    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:17:56.908019    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:18:01.910535    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:18:01.910717    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:18:01.931262    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:18:01.931374    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:18:01.944300    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:18:01.944372    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:18:01.955467    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:18:01.955537    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:18:01.965607    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:18:01.965686    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:18:01.975699    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:18:01.975758    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:18:01.992819    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:18:01.992885    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:18:02.002891    4364 logs.go:276] 0 containers: []
	W0307 10:18:02.002905    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:18:02.002969    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:18:02.013717    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:18:02.013736    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:18:02.013741    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:18:02.036468    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:18:02.036479    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:18:02.047651    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:18:02.047662    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:18:02.062315    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:18:02.062330    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:18:02.075766    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:18:02.075777    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:18:02.111769    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:18:02.111783    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:18:02.132019    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:18:02.132028    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:18:02.143865    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:18:02.143877    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:18:02.155425    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:18:02.155436    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:18:02.159748    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:18:02.159759    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:18:02.185066    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:18:02.185077    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:18:02.199701    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:18:02.199713    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:18:02.211910    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:18:02.211920    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:18:02.235054    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:18:02.235063    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:18:02.273180    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:18:02.273198    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:18:02.285799    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:18:02.285811    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:18:02.301097    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:18:02.301110    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:18:04.821266    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:18:09.823782    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:18:09.824092    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:18:09.853395    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:18:09.853521    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:18:09.872837    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:18:09.872930    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:18:09.886353    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:18:09.886424    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:18:09.898898    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:18:09.898973    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:18:09.909277    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:18:09.909348    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:18:09.919978    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:18:09.920047    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:18:09.932026    4364 logs.go:276] 0 containers: []
	W0307 10:18:09.932039    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:18:09.932102    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:18:09.942396    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:18:09.942414    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:18:09.942419    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:18:09.963800    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:18:09.963810    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:18:09.975846    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:18:09.975857    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:18:09.999893    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:18:09.999902    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:18:10.011561    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:18:10.011572    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:18:10.025729    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:18:10.025738    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:18:10.040089    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:18:10.040098    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:18:10.051633    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:18:10.051644    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:18:10.063768    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:18:10.063779    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:18:10.075079    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:18:10.075092    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:18:10.079702    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:18:10.079709    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:18:10.115625    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:18:10.115636    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:18:10.127634    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:18:10.127646    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:18:10.144591    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:18:10.144601    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:18:10.183341    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:18:10.183355    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:18:10.208607    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:18:10.208619    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:18:10.225446    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:18:10.225457    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:18:12.745238    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:18:17.747308    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:18:17.747547    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:18:17.773833    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:18:17.773990    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:18:17.791174    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:18:17.791272    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:18:17.804694    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:18:17.804763    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:18:17.816216    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:18:17.816280    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:18:17.830402    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:18:17.830469    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:18:17.840672    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:18:17.840733    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:18:17.850466    4364 logs.go:276] 0 containers: []
	W0307 10:18:17.850477    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:18:17.850527    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:18:17.861111    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:18:17.861128    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:18:17.861134    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:18:17.900213    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:18:17.900223    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:18:17.945087    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:18:17.945100    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:18:17.958994    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:18:17.959007    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:18:17.970580    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:18:17.970591    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:18:17.974642    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:18:17.974649    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:18:18.002954    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:18:18.002964    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:18:18.018386    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:18:18.018396    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:18:18.033252    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:18:18.033265    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:18:18.044844    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:18:18.044855    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:18:18.062087    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:18:18.062098    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:18:18.073623    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:18:18.073633    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:18:18.098051    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:18:18.098061    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:18:18.111443    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:18:18.111453    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:18:18.122805    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:18:18.122817    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:18:18.136770    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:18:18.136781    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:18:18.147797    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:18:18.147809    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
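	Before each gathering pass, one docker ps query per expected component name (the logs.go:276 lines above) discovers which container IDs to pull logs from. A hedged Go sketch of that discovery step, using the exact command string from the log (the loop wrapper is illustrative):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        for _, name := range []string{
	            "k8s_kube-apiserver", "k8s_etcd", "k8s_coredns", "k8s_kube-scheduler",
	            "k8s_kube-proxy", "k8s_kube-controller-manager", "k8s_kindnet",
	            "k8s_storage-provisioner",
	        } {
	            out, err := exec.Command("docker", "ps", "-a",
	                "--filter=name="+name, "--format={{.ID}}").Output()
	            if err != nil {
	                fmt.Println(name, "query failed:", err)
	                continue
	            }
	            ids := strings.Fields(string(out))
	            fmt.Printf("%d containers: %v\n", len(ids), ids)
	        }
	    }

	Two IDs per component (for example 9315e04db43f and 1390d083217d for kube-apiserver) indicate a restarted container alongside its exited predecessor; the kindnet filter consistently matches nothing, hence the repeated warning.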
	I0307 10:18:20.661933    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:18:25.664417    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:18:25.664535    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:18:25.683513    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:18:25.683610    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:18:25.699250    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:18:25.699327    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:18:25.710861    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:18:25.710932    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:18:25.722682    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:18:25.722764    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:18:25.733668    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:18:25.733738    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:18:25.744002    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:18:25.744071    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:18:25.754137    4364 logs.go:276] 0 containers: []
	W0307 10:18:25.754148    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:18:25.754208    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:18:25.764592    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:18:25.764610    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:18:25.764616    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:18:25.768733    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:18:25.768741    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:18:25.783083    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:18:25.783094    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:18:25.794365    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:18:25.794377    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:18:25.830824    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:18:25.830832    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:18:25.844360    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:18:25.844371    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:18:25.855949    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:18:25.855961    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:18:25.869085    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:18:25.869094    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:18:25.885899    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:18:25.885909    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:18:25.897461    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:18:25.897475    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:18:25.934103    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:18:25.934114    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:18:25.960042    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:18:25.960053    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:18:25.980520    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:18:25.980531    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:18:25.991807    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:18:25.991817    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:18:26.012763    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:18:26.012778    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:18:26.030715    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:18:26.030726    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:18:26.041547    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:18:26.041558    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:18:28.565967    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:18:33.568064    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:18:33.568303    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:18:33.595091    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:18:33.595217    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:18:33.613173    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:18:33.613258    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:18:33.626647    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:18:33.626726    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:18:33.641784    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:18:33.641852    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:18:33.652045    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:18:33.652113    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:18:33.662259    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:18:33.662335    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:18:33.672362    4364 logs.go:276] 0 containers: []
	W0307 10:18:33.672375    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:18:33.672430    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:18:33.683019    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:18:33.683034    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:18:33.683039    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:18:33.697295    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:18:33.697305    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:18:33.709449    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:18:33.709459    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:18:33.713978    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:18:33.713984    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:18:33.728078    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:18:33.728091    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:18:33.739086    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:18:33.739098    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:18:33.753868    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:18:33.753878    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:18:33.767586    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:18:33.767596    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:18:33.802116    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:18:33.802127    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:18:33.816122    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:18:33.816133    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:18:33.828015    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:18:33.828027    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:18:33.839268    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:18:33.839281    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:18:33.862990    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:18:33.862998    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:18:33.901534    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:18:33.901553    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:18:33.928697    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:18:33.928715    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:18:33.946455    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:18:33.946465    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:18:33.959750    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:18:33.959762    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:18:36.473266    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:18:41.475387    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:18:41.475533    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:18:41.486900    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:18:41.486977    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:18:41.502077    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:18:41.502147    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:18:41.512625    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:18:41.512689    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:18:41.524488    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:18:41.524559    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:18:41.534526    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:18:41.534592    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:18:41.545241    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:18:41.545299    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:18:41.555228    4364 logs.go:276] 0 containers: []
	W0307 10:18:41.555245    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:18:41.555305    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:18:41.565906    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:18:41.565924    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:18:41.565929    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:18:41.605449    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:18:41.605463    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:18:41.619497    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:18:41.619508    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:18:41.630915    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:18:41.630928    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:18:41.645736    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:18:41.645747    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:18:41.668094    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:18:41.668101    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:18:41.679316    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:18:41.679327    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:18:41.683393    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:18:41.683400    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:18:41.694570    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:18:41.694580    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:18:41.706475    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:18:41.706488    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:18:41.717526    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:18:41.717537    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:18:41.752674    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:18:41.752687    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:18:41.766366    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:18:41.766376    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:18:41.791881    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:18:41.791891    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:18:41.806946    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:18:41.806956    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:18:41.825050    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:18:41.825060    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:18:41.843452    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:18:41.843462    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:18:44.356944    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:18:49.359153    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:18:49.359383    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:18:49.382343    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:18:49.382450    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:18:49.398160    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:18:49.398243    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:18:49.414445    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:18:49.414518    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:18:49.425348    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:18:49.425416    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:18:49.435622    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:18:49.435692    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:18:49.448368    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:18:49.448444    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:18:49.458289    4364 logs.go:276] 0 containers: []
	W0307 10:18:49.458303    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:18:49.458362    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:18:49.469017    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:18:49.469036    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:18:49.469042    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:18:49.483858    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:18:49.483868    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:18:49.499000    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:18:49.499011    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:18:49.510897    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:18:49.510909    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:18:49.526010    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:18:49.526021    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:18:49.550705    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:18:49.550716    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:18:49.564754    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:18:49.564764    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:18:49.580859    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:18:49.580869    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:18:49.584890    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:18:49.584899    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:18:49.620134    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:18:49.620144    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:18:49.633499    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:18:49.633509    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:18:49.645632    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:18:49.645646    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:18:49.668547    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:18:49.668554    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:18:49.706474    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:18:49.706482    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:18:49.724555    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:18:49.724565    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:18:49.736507    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:18:49.736519    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:18:49.748711    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:18:49.748722    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:18:52.267379    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:18:57.269323    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:18:57.269668    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:18:57.305069    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:18:57.305192    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:18:57.324615    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:18:57.324700    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:18:57.339010    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:18:57.339087    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:18:57.350784    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:18:57.350854    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:18:57.361449    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:18:57.361525    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:18:57.371808    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:18:57.371880    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:18:57.387669    4364 logs.go:276] 0 containers: []
	W0307 10:18:57.387681    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:18:57.387740    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:18:57.398345    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:18:57.398362    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:18:57.398369    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:18:57.413376    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:18:57.413388    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:18:57.424326    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:18:57.424336    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:18:57.436660    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:18:57.436672    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:18:57.447985    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:18:57.447997    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:18:57.464779    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:18:57.464793    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:18:57.480408    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:18:57.480418    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:18:57.494219    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:18:57.494230    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:18:57.512504    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:18:57.512514    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:18:57.526557    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:18:57.526568    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:18:57.551966    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:18:57.551977    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:18:57.565610    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:18:57.565621    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:18:57.577182    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:18:57.577191    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:18:57.581827    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:18:57.581837    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:18:57.617849    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:18:57.617864    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:18:57.641593    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:18:57.641603    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:18:57.679837    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:18:57.679846    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
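
Note: the block above is minikube's log-collection loop. For each control-plane component it lists matching containers with a docker ps name filter, then tails the last 400 lines of each hit. To replay one probe by hand inside the guest (a sketch reusing the same commands the log shows, with kube-apiserver as the example component):

	id=$(docker ps -a --filter=name=k8s_kube-apiserver --format '{{.ID}}' | head -n1)
	[ -n "$id" ] && docker logs --tail 400 "$id"
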
	I0307 10:19:00.196273    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:19:05.198606    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
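
Note: each healthz probe is given roughly five seconds before the client gives up with "context deadline exceeded", after which minikube falls back to the log-collection loop. The equivalent manual check against the endpoint from the log (a sketch; -k is needed because the apiserver serves a cluster-internal certificate):

	curl -sk --max-time 5 https://10.0.2.15:8443/healthz
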
	I0307 10:19:05.198960    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:19:05.228455    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:19:05.228589    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:19:05.247816    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:19:05.247909    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:19:05.262103    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:19:05.262184    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:19:05.273602    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:19:05.273676    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:19:05.288316    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:19:05.288390    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:19:05.299153    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:19:05.299231    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:19:05.309827    4364 logs.go:276] 0 containers: []
	W0307 10:19:05.309844    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:19:05.309904    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:19:05.325065    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:19:05.325084    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:19:05.325089    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:19:05.362685    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:19:05.362697    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:19:05.392313    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:19:05.392324    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:19:05.406081    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:19:05.406092    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:19:05.418180    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:19:05.418192    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:19:05.431813    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:19:05.431822    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:19:05.445919    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:19:05.445930    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:19:05.462330    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:19:05.462341    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:19:05.473526    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:19:05.473541    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:19:05.485492    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:19:05.485502    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:19:05.498978    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:19:05.498988    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:19:05.516578    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:19:05.516588    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:19:05.541447    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:19:05.541469    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:19:05.547953    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:19:05.547966    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:19:05.575487    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:19:05.575498    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:19:05.610078    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:19:05.610090    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:19:05.625044    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:19:05.625055    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:19:08.138097    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:19:13.140308    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:19:13.140505    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:19:13.152409    4364 logs.go:276] 2 containers: [9315e04db43f 1390d083217d]
	I0307 10:19:13.152489    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:19:13.163877    4364 logs.go:276] 2 containers: [e6eecfc92195 2ed248da88ff]
	I0307 10:19:13.163943    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:19:13.174908    4364 logs.go:276] 1 containers: [303b09d3c11e]
	I0307 10:19:13.174981    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:19:13.186063    4364 logs.go:276] 2 containers: [6ce91a640ee6 5b727911a818]
	I0307 10:19:13.186133    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:19:13.196630    4364 logs.go:276] 1 containers: [c3ccf7db5189]
	I0307 10:19:13.196694    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:19:13.207266    4364 logs.go:276] 2 containers: [2933b41a401d 8c3d27435da1]
	I0307 10:19:13.207340    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:19:13.217255    4364 logs.go:276] 0 containers: []
	W0307 10:19:13.217268    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:19:13.217325    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:19:13.227612    4364 logs.go:276] 2 containers: [1635aeeacb44 a8b82e7374e2]
	I0307 10:19:13.227627    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:19:13.227633    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:19:13.232097    4364 logs.go:123] Gathering logs for kube-apiserver [9315e04db43f] ...
	I0307 10:19:13.232102    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9315e04db43f"
	I0307 10:19:13.246220    4364 logs.go:123] Gathering logs for kube-scheduler [5b727911a818] ...
	I0307 10:19:13.246230    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b727911a818"
	I0307 10:19:13.261062    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:19:13.261073    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:19:13.282906    4364 logs.go:123] Gathering logs for kube-apiserver [1390d083217d] ...
	I0307 10:19:13.282921    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1390d083217d"
	I0307 10:19:13.308183    4364 logs.go:123] Gathering logs for storage-provisioner [1635aeeacb44] ...
	I0307 10:19:13.308195    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1635aeeacb44"
	I0307 10:19:13.319828    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:19:13.319838    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:19:13.358207    4364 logs.go:123] Gathering logs for kube-controller-manager [2933b41a401d] ...
	I0307 10:19:13.358218    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2933b41a401d"
	I0307 10:19:13.376679    4364 logs.go:123] Gathering logs for kube-controller-manager [8c3d27435da1] ...
	I0307 10:19:13.376690    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d27435da1"
	I0307 10:19:13.390287    4364 logs.go:123] Gathering logs for storage-provisioner [a8b82e7374e2] ...
	I0307 10:19:13.390298    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b82e7374e2"
	I0307 10:19:13.404112    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:19:13.404122    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:19:13.444926    4364 logs.go:123] Gathering logs for etcd [e6eecfc92195] ...
	I0307 10:19:13.444938    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6eecfc92195"
	I0307 10:19:13.459054    4364 logs.go:123] Gathering logs for etcd [2ed248da88ff] ...
	I0307 10:19:13.459066    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed248da88ff"
	I0307 10:19:13.474519    4364 logs.go:123] Gathering logs for coredns [303b09d3c11e] ...
	I0307 10:19:13.474530    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 303b09d3c11e"
	I0307 10:19:13.486040    4364 logs.go:123] Gathering logs for kube-scheduler [6ce91a640ee6] ...
	I0307 10:19:13.486054    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce91a640ee6"
	I0307 10:19:13.497564    4364 logs.go:123] Gathering logs for kube-proxy [c3ccf7db5189] ...
	I0307 10:19:13.497575    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ccf7db5189"
	I0307 10:19:13.514032    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:19:13.514044    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:19:16.027949    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:19:21.030184    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:19:21.030305    4364 kubeadm.go:591] duration metric: took 4m3.836076167s to restartPrimaryControlPlane
	W0307 10:19:21.030421    4364 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0307 10:19:21.030468    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0307 10:19:22.123367    4364 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.092919292s)
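
Note: after 4m3s of failed healthz probes, restartPrimaryControlPlane gives up and minikube wipes the cluster state before re-initializing. The reset it runs (verbatim from the line above) is:

	sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
	  kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force

--force skips the confirmation prompt, and --cri-socket points kubeadm at cri-dockerd instead of the default runtime socket.
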
	I0307 10:19:22.123431    4364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 10:19:22.128194    4364 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 10:19:22.131024    4364 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 10:19:22.133536    4364 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 10:19:22.133543    4364 kubeadm.go:156] found existing configuration files:
	
	I0307 10:19:22.133567    4364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/admin.conf
	I0307 10:19:22.136305    4364 kubeadm.go:162] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0307 10:19:22.136344    4364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0307 10:19:22.139431    4364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/kubelet.conf
	I0307 10:19:22.142157    4364 kubeadm.go:162] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0307 10:19:22.142182    4364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0307 10:19:22.144788    4364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/controller-manager.conf
	I0307 10:19:22.147831    4364 kubeadm.go:162] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0307 10:19:22.147855    4364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0307 10:19:22.150653    4364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/scheduler.conf
	I0307 10:19:22.153225    4364 kubeadm.go:162] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0307 10:19:22.153247    4364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
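
Note: the grep/rm sequence above is minikube's stale-kubeconfig cleanup: any of the four kubeconfigs that does not reference the expected endpoint https://control-plane.minikube.internal:50517 is deleted so kubeadm init can regenerate it. Here kubeadm reset had already removed all four files, so every grep exits with status 2 and every rm is a no-op. Condensed into a loop (a sketch of the same logic):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q https://control-plane.minikube.internal:50517 /etc/kubernetes/$f.conf \
	    || sudo rm -f /etc/kubernetes/$f.conf
	done
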
	I0307 10:19:22.156491    4364 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0307 10:19:22.174132    4364 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0307 10:19:22.174165    4364 kubeadm.go:309] [preflight] Running pre-flight checks
	I0307 10:19:22.222800    4364 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0307 10:19:22.222860    4364 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0307 10:19:22.222909    4364 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0307 10:19:22.271411    4364 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0307 10:19:22.275990    4364 out.go:204]   - Generating certificates and keys ...
	I0307 10:19:22.276067    4364 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0307 10:19:22.276108    4364 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0307 10:19:22.276157    4364 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0307 10:19:22.276195    4364 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0307 10:19:22.276233    4364 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0307 10:19:22.276261    4364 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0307 10:19:22.276293    4364 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0307 10:19:22.276331    4364 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0307 10:19:22.276408    4364 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0307 10:19:22.276455    4364 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0307 10:19:22.276479    4364 kubeadm.go:309] [certs] Using the existing "sa" key
	I0307 10:19:22.276523    4364 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0307 10:19:22.316617    4364 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0307 10:19:22.390999    4364 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0307 10:19:22.437089    4364 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0307 10:19:22.617118    4364 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0307 10:19:22.646757    4364 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 10:19:22.647121    4364 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 10:19:22.647151    4364 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0307 10:19:22.716160    4364 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0307 10:19:22.723639    4364 out.go:204]   - Booting up control plane ...
	I0307 10:19:22.723689    4364 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0307 10:19:22.723732    4364 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0307 10:19:22.723767    4364 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0307 10:19:22.723814    4364 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0307 10:19:22.723896    4364 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0307 10:19:27.224543    4364 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.501320 seconds
	I0307 10:19:27.224621    4364 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0307 10:19:27.228461    4364 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0307 10:19:27.739336    4364 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0307 10:19:27.739711    4364 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-853000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0307 10:19:28.243917    4364 kubeadm.go:309] [bootstrap-token] Using token: rpjmeh.3x67i5b5l73s4022
	I0307 10:19:28.247695    4364 out.go:204]   - Configuring RBAC rules ...
	I0307 10:19:28.247773    4364 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0307 10:19:28.249766    4364 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0307 10:19:28.255361    4364 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0307 10:19:28.256217    4364 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0307 10:19:28.256946    4364 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0307 10:19:28.257789    4364 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0307 10:19:28.260676    4364 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0307 10:19:28.404899    4364 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0307 10:19:28.653186    4364 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0307 10:19:28.653747    4364 kubeadm.go:309] 
	I0307 10:19:28.653790    4364 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0307 10:19:28.653803    4364 kubeadm.go:309] 
	I0307 10:19:28.653846    4364 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0307 10:19:28.653851    4364 kubeadm.go:309] 
	I0307 10:19:28.653864    4364 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0307 10:19:28.653901    4364 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0307 10:19:28.653930    4364 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0307 10:19:28.653934    4364 kubeadm.go:309] 
	I0307 10:19:28.653968    4364 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0307 10:19:28.653974    4364 kubeadm.go:309] 
	I0307 10:19:28.654005    4364 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0307 10:19:28.654008    4364 kubeadm.go:309] 
	I0307 10:19:28.654033    4364 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0307 10:19:28.654079    4364 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0307 10:19:28.654124    4364 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0307 10:19:28.654128    4364 kubeadm.go:309] 
	I0307 10:19:28.654184    4364 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0307 10:19:28.654239    4364 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0307 10:19:28.654242    4364 kubeadm.go:309] 
	I0307 10:19:28.654298    4364 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token rpjmeh.3x67i5b5l73s4022 \
	I0307 10:19:28.654361    4364 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4f3f709732e35797580c9b8d11f82ef6c52734bdc7940106dd5836141654d720 \
	I0307 10:19:28.654375    4364 kubeadm.go:309] 	--control-plane 
	I0307 10:19:28.654380    4364 kubeadm.go:309] 
	I0307 10:19:28.654421    4364 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0307 10:19:28.654424    4364 kubeadm.go:309] 
	I0307 10:19:28.654479    4364 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token rpjmeh.3x67i5b5l73s4022 \
	I0307 10:19:28.654537    4364 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4f3f709732e35797580c9b8d11f82ef6c52734bdc7940106dd5836141654d720 
	I0307 10:19:28.654649    4364 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
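
Note: the [WARNING Service-Kubelet] line is kubeadm's own preflight advice; minikube starts the kubelet itself a few steps later (sudo systemctl start kubelet), but on a hand-managed host the suggested fix would be:

	sudo systemctl enable kubelet.service
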
	I0307 10:19:28.654658    4364 cni.go:84] Creating CNI manager for ""
	I0307 10:19:28.654666    4364 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 10:19:28.659306    4364 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0307 10:19:28.666327    4364 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0307 10:19:28.669430    4364 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
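
Note: the 457-byte /etc/cni/net.d/1-k8s.conflist pushed above is not printed in the log. A representative bridge conflist of the kind this step generates might look like the following (the exact field values are an assumption, not the file's actual contents):

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
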
	I0307 10:19:28.674419    4364 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0307 10:19:28.674460    4364 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 10:19:28.674486    4364 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-853000 minikube.k8s.io/updated_at=2024_03_07T10_19_28_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c2be9b5c96c7962d271300acacf405d2402b272f minikube.k8s.io/name=stopped-upgrade-853000 minikube.k8s.io/primary=true
	I0307 10:19:28.714941    4364 kubeadm.go:1106] duration metric: took 40.516833ms to wait for elevateKubeSystemPrivileges
	I0307 10:19:28.714946    4364 ops.go:34] apiserver oom_adj: -16
	W0307 10:19:28.714964    4364 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0307 10:19:28.714967    4364 kubeadm.go:393] duration metric: took 4m11.53410675s to StartCluster
	I0307 10:19:28.714976    4364 settings.go:142] acquiring lock: {Name:mke72688bb63f8128eac153bbf90929d78ec9d29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:19:28.715052    4364 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:19:28.715446    4364 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/kubeconfig: {Name:mkeef9e7922e618c2ac8219607b646aeaf5f61cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:19:28.715633    4364 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:19:28.719310    4364 out.go:177] * Verifying Kubernetes components...
	I0307 10:19:28.715701    4364 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0307 10:19:28.715815    4364 config.go:182] Loaded profile config "stopped-upgrade-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 10:19:28.727252    4364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:19:28.727254    4364 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-853000"
	I0307 10:19:28.727257    4364 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-853000"
	I0307 10:19:28.727270    4364 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-853000"
	W0307 10:19:28.727273    4364 addons.go:243] addon storage-provisioner should already be in state true
	I0307 10:19:28.727273    4364 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-853000"
	I0307 10:19:28.727285    4364 host.go:66] Checking if "stopped-upgrade-853000" exists ...
	I0307 10:19:28.728557    4364 kapi.go:59] client config for stopped-upgrade-853000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/stopped-upgrade-853000/client.key", CAFile:"/Users/jenkins/minikube-integration/18241-1349/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1016e36a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 10:19:28.728680    4364 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-853000"
	W0307 10:19:28.728685    4364 addons.go:243] addon default-storageclass should already be in state true
	I0307 10:19:28.728697    4364 host.go:66] Checking if "stopped-upgrade-853000" exists ...
	I0307 10:19:28.733269    4364 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 10:19:28.737380    4364 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 10:19:28.737397    4364 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0307 10:19:28.737411    4364 sshutil.go:53] new ssh client: &{IP:localhost Port:50483 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/stopped-upgrade-853000/id_rsa Username:docker}
	I0307 10:19:28.738358    4364 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0307 10:19:28.738363    4364 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0307 10:19:28.738368    4364 sshutil.go:53] new ssh client: &{IP:localhost Port:50483 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/stopped-upgrade-853000/id_rsa Username:docker}
	I0307 10:19:28.802296    4364 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 10:19:28.808079    4364 api_server.go:52] waiting for apiserver process to appear ...
	I0307 10:19:28.808150    4364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 10:19:28.812500    4364 api_server.go:72] duration metric: took 96.859417ms to wait for apiserver process to appear ...
	I0307 10:19:28.812508    4364 api_server.go:88] waiting for apiserver healthz status ...
	I0307 10:19:28.812515    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:19:28.818953    4364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0307 10:19:28.863481    4364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 10:19:33.814495    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:19:33.814526    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:19:38.814655    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:19:38.814696    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:19:43.814879    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:19:43.814914    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:19:48.815259    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:19:48.815295    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:19:53.815743    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:19:53.815773    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:19:58.816711    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:19:58.816763    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0307 10:19:59.166567    4364 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0307 10:19:59.171715    4364 out.go:177] * Enabled addons: storage-provisioner
	I0307 10:19:59.182688    4364 addons.go:505] duration metric: took 30.468028875s for enable addons: enabled=[storage-provisioner]
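
Note: "Enabled addons: storage-provisioner" only means its manifest was applied over SSH; with the apiserver still unreachable, the default-storageclass callback timed out (the i/o timeout above). Once the apiserver is healthy, the same apply can be retried by hand using the command the log shows:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
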
	I0307 10:20:03.817736    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:20:03.817780    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:20:08.819244    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:20:08.819303    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:20:13.821093    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:20:13.821123    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:20:18.823143    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:20:18.823169    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:20:23.823776    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:20:23.823806    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:20:28.825843    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:20:28.826037    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:20:28.858511    4364 logs.go:276] 1 containers: [427a7ba97366]
	I0307 10:20:28.858596    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:20:28.883168    4364 logs.go:276] 1 containers: [636a1d7c9755]
	I0307 10:20:28.883245    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:20:28.895128    4364 logs.go:276] 2 containers: [8c62957ec4e6 418ffa66d7e1]
	I0307 10:20:28.895208    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:20:28.910103    4364 logs.go:276] 1 containers: [f8136f0d1e9e]
	I0307 10:20:28.910174    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:20:28.920723    4364 logs.go:276] 1 containers: [e401b386b35d]
	I0307 10:20:28.920794    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:20:28.931116    4364 logs.go:276] 1 containers: [d1da6e8a1ecf]
	I0307 10:20:28.931181    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:20:28.940979    4364 logs.go:276] 0 containers: []
	W0307 10:20:28.940993    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:20:28.941060    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:20:28.951106    4364 logs.go:276] 1 containers: [826953fd4638]
	I0307 10:20:28.951120    4364 logs.go:123] Gathering logs for coredns [418ffa66d7e1] ...
	I0307 10:20:28.951125    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ffa66d7e1"
	I0307 10:20:28.973050    4364 logs.go:123] Gathering logs for storage-provisioner [826953fd4638] ...
	I0307 10:20:28.973065    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 826953fd4638"
	I0307 10:20:28.984986    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:20:28.984998    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:20:29.008507    4364 logs.go:123] Gathering logs for coredns [8c62957ec4e6] ...
	I0307 10:20:29.008519    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c62957ec4e6"
	I0307 10:20:29.021051    4364 logs.go:123] Gathering logs for kube-scheduler [f8136f0d1e9e] ...
	I0307 10:20:29.021064    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8136f0d1e9e"
	I0307 10:20:29.039197    4364 logs.go:123] Gathering logs for kube-proxy [e401b386b35d] ...
	I0307 10:20:29.039208    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e401b386b35d"
	I0307 10:20:29.050966    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:20:29.050977    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:20:29.085213    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:20:29.085229    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:20:29.090264    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:20:29.090271    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:20:29.126884    4364 logs.go:123] Gathering logs for kube-apiserver [427a7ba97366] ...
	I0307 10:20:29.126896    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 427a7ba97366"
	I0307 10:20:29.141681    4364 logs.go:123] Gathering logs for etcd [636a1d7c9755] ...
	I0307 10:20:29.141695    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636a1d7c9755"
	I0307 10:20:29.155657    4364 logs.go:123] Gathering logs for kube-controller-manager [d1da6e8a1ecf] ...
	I0307 10:20:29.155668    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1da6e8a1ecf"
	I0307 10:20:29.173469    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:20:29.173482    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:20:31.687440    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:20:36.689565    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:20:36.689690    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:20:36.702158    4364 logs.go:276] 1 containers: [427a7ba97366]
	I0307 10:20:36.702238    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:20:36.713141    4364 logs.go:276] 1 containers: [636a1d7c9755]
	I0307 10:20:36.713212    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:20:36.723924    4364 logs.go:276] 2 containers: [8c62957ec4e6 418ffa66d7e1]
	I0307 10:20:36.723993    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:20:36.734223    4364 logs.go:276] 1 containers: [f8136f0d1e9e]
	I0307 10:20:36.734290    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:20:36.745308    4364 logs.go:276] 1 containers: [e401b386b35d]
	I0307 10:20:36.745383    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:20:36.755691    4364 logs.go:276] 1 containers: [d1da6e8a1ecf]
	I0307 10:20:36.755760    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:20:36.765950    4364 logs.go:276] 0 containers: []
	W0307 10:20:36.765965    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:20:36.766029    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:20:36.776001    4364 logs.go:276] 1 containers: [826953fd4638]
	I0307 10:20:36.776016    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:20:36.776022    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:20:36.780149    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:20:36.780157    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:20:36.815284    4364 logs.go:123] Gathering logs for kube-apiserver [427a7ba97366] ...
	I0307 10:20:36.815295    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 427a7ba97366"
	I0307 10:20:36.831545    4364 logs.go:123] Gathering logs for kube-scheduler [f8136f0d1e9e] ...
	I0307 10:20:36.831554    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8136f0d1e9e"
	I0307 10:20:36.845941    4364 logs.go:123] Gathering logs for storage-provisioner [826953fd4638] ...
	I0307 10:20:36.845950    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 826953fd4638"
	I0307 10:20:36.857290    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:20:36.857300    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:20:36.882490    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:20:36.882497    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:20:36.893956    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:20:36.893971    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:20:36.929108    4364 logs.go:123] Gathering logs for coredns [8c62957ec4e6] ...
	I0307 10:20:36.929119    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c62957ec4e6"
	I0307 10:20:36.940822    4364 logs.go:123] Gathering logs for coredns [418ffa66d7e1] ...
	I0307 10:20:36.940833    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ffa66d7e1"
	I0307 10:20:36.952523    4364 logs.go:123] Gathering logs for kube-proxy [e401b386b35d] ...
	I0307 10:20:36.952532    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e401b386b35d"
	I0307 10:20:36.965244    4364 logs.go:123] Gathering logs for kube-controller-manager [d1da6e8a1ecf] ...
	I0307 10:20:36.965254    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1da6e8a1ecf"
	I0307 10:20:36.982620    4364 logs.go:123] Gathering logs for etcd [636a1d7c9755] ...
	I0307 10:20:36.982631    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636a1d7c9755"
	I0307 10:20:39.502692    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:20:44.504778    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:20:44.504976    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:20:44.520791    4364 logs.go:276] 1 containers: [427a7ba97366]
	I0307 10:20:44.520880    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:20:44.533181    4364 logs.go:276] 1 containers: [636a1d7c9755]
	I0307 10:20:44.533240    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:20:44.543734    4364 logs.go:276] 2 containers: [8c62957ec4e6 418ffa66d7e1]
	I0307 10:20:44.543805    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:20:44.553921    4364 logs.go:276] 1 containers: [f8136f0d1e9e]
	I0307 10:20:44.553999    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:20:44.564661    4364 logs.go:276] 1 containers: [e401b386b35d]
	I0307 10:20:44.564745    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:20:44.579038    4364 logs.go:276] 1 containers: [d1da6e8a1ecf]
	I0307 10:20:44.579106    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:20:44.588762    4364 logs.go:276] 0 containers: []
	W0307 10:20:44.588774    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:20:44.588831    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:20:44.599307    4364 logs.go:276] 1 containers: [826953fd4638]
	I0307 10:20:44.599322    4364 logs.go:123] Gathering logs for kube-apiserver [427a7ba97366] ...
	I0307 10:20:44.599329    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 427a7ba97366"
	I0307 10:20:44.613702    4364 logs.go:123] Gathering logs for etcd [636a1d7c9755] ...
	I0307 10:20:44.613713    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636a1d7c9755"
	I0307 10:20:44.627775    4364 logs.go:123] Gathering logs for coredns [418ffa66d7e1] ...
	I0307 10:20:44.627786    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ffa66d7e1"
	I0307 10:20:44.639358    4364 logs.go:123] Gathering logs for kube-scheduler [f8136f0d1e9e] ...
	I0307 10:20:44.639367    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8136f0d1e9e"
	I0307 10:20:44.653728    4364 logs.go:123] Gathering logs for kube-proxy [e401b386b35d] ...
	I0307 10:20:44.653741    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e401b386b35d"
	I0307 10:20:44.666043    4364 logs.go:123] Gathering logs for storage-provisioner [826953fd4638] ...
	I0307 10:20:44.666054    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 826953fd4638"
	I0307 10:20:44.677311    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:20:44.677321    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:20:44.712976    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:20:44.712989    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:20:44.717980    4364 logs.go:123] Gathering logs for coredns [8c62957ec4e6] ...
	I0307 10:20:44.717987    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c62957ec4e6"
	I0307 10:20:44.729101    4364 logs.go:123] Gathering logs for kube-controller-manager [d1da6e8a1ecf] ...
	I0307 10:20:44.729112    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1da6e8a1ecf"
	I0307 10:20:44.746362    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:20:44.746371    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:20:44.770012    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:20:44.770019    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:20:44.781491    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:20:44.781502    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:20:47.317146    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:20:52.318401    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:20:52.318634    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:20:52.335025    4364 logs.go:276] 1 containers: [427a7ba97366]
	I0307 10:20:52.335112    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:20:52.347425    4364 logs.go:276] 1 containers: [636a1d7c9755]
	I0307 10:20:52.347502    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:20:52.358260    4364 logs.go:276] 2 containers: [8c62957ec4e6 418ffa66d7e1]
	I0307 10:20:52.358327    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:20:52.369387    4364 logs.go:276] 1 containers: [f8136f0d1e9e]
	I0307 10:20:52.369458    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:20:52.380442    4364 logs.go:276] 1 containers: [e401b386b35d]
	I0307 10:20:52.380520    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:20:52.392714    4364 logs.go:276] 1 containers: [d1da6e8a1ecf]
	I0307 10:20:52.392788    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:20:52.405804    4364 logs.go:276] 0 containers: []
	W0307 10:20:52.405817    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:20:52.405878    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:20:52.420799    4364 logs.go:276] 1 containers: [826953fd4638]
	I0307 10:20:52.420832    4364 logs.go:123] Gathering logs for kube-scheduler [f8136f0d1e9e] ...
	I0307 10:20:52.420846    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8136f0d1e9e"
	I0307 10:20:52.448460    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:20:52.448478    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:20:52.469771    4364 logs.go:123] Gathering logs for coredns [8c62957ec4e6] ...
	I0307 10:20:52.469835    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c62957ec4e6"
	I0307 10:20:52.513354    4364 logs.go:123] Gathering logs for coredns [418ffa66d7e1] ...
	I0307 10:20:52.513369    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ffa66d7e1"
	I0307 10:20:52.552574    4364 logs.go:123] Gathering logs for kube-proxy [e401b386b35d] ...
	I0307 10:20:52.552589    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e401b386b35d"
	I0307 10:20:52.586930    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:20:52.586945    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:20:52.632778    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:20:52.632811    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:20:52.650933    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:20:52.650944    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:20:52.693614    4364 logs.go:123] Gathering logs for kube-apiserver [427a7ba97366] ...
	I0307 10:20:52.693627    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 427a7ba97366"
	I0307 10:20:52.715984    4364 logs.go:123] Gathering logs for etcd [636a1d7c9755] ...
	I0307 10:20:52.715997    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636a1d7c9755"
	I0307 10:20:52.734233    4364 logs.go:123] Gathering logs for kube-controller-manager [d1da6e8a1ecf] ...
	I0307 10:20:52.734247    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1da6e8a1ecf"
	I0307 10:20:52.752908    4364 logs.go:123] Gathering logs for storage-provisioner [826953fd4638] ...
	I0307 10:20:52.752920    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 826953fd4638"
	I0307 10:20:52.764915    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:20:52.764924    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:20:55.291164    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:21:00.293765    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:21:00.294166    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:21:00.338235    4364 logs.go:276] 1 containers: [427a7ba97366]
	I0307 10:21:00.338330    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:21:00.358631    4364 logs.go:276] 1 containers: [636a1d7c9755]
	I0307 10:21:00.358715    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:21:00.373723    4364 logs.go:276] 2 containers: [8c62957ec4e6 418ffa66d7e1]
	I0307 10:21:00.373816    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:21:00.387567    4364 logs.go:276] 1 containers: [f8136f0d1e9e]
	I0307 10:21:00.387653    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:21:00.399823    4364 logs.go:276] 1 containers: [e401b386b35d]
	I0307 10:21:00.399892    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:21:00.412430    4364 logs.go:276] 1 containers: [d1da6e8a1ecf]
	I0307 10:21:00.412500    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:21:00.424245    4364 logs.go:276] 0 containers: []
	W0307 10:21:00.424258    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:21:00.424319    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:21:00.436633    4364 logs.go:276] 1 containers: [826953fd4638]
	I0307 10:21:00.436648    4364 logs.go:123] Gathering logs for coredns [8c62957ec4e6] ...
	I0307 10:21:00.436655    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c62957ec4e6"
	I0307 10:21:00.450793    4364 logs.go:123] Gathering logs for coredns [418ffa66d7e1] ...
	I0307 10:21:00.450805    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ffa66d7e1"
	I0307 10:21:00.463861    4364 logs.go:123] Gathering logs for kube-proxy [e401b386b35d] ...
	I0307 10:21:00.463876    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e401b386b35d"
	I0307 10:21:00.477436    4364 logs.go:123] Gathering logs for kube-controller-manager [d1da6e8a1ecf] ...
	I0307 10:21:00.477452    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1da6e8a1ecf"
	I0307 10:21:00.499525    4364 logs.go:123] Gathering logs for storage-provisioner [826953fd4638] ...
	I0307 10:21:00.499546    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 826953fd4638"
	I0307 10:21:00.514242    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:21:00.514253    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:21:00.538179    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:21:00.538188    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:21:00.551347    4364 logs.go:123] Gathering logs for kube-apiserver [427a7ba97366] ...
	I0307 10:21:00.551358    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 427a7ba97366"
	I0307 10:21:00.566069    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:21:00.566079    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:21:00.570602    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:21:00.570609    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:21:00.613078    4364 logs.go:123] Gathering logs for etcd [636a1d7c9755] ...
	I0307 10:21:00.613087    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636a1d7c9755"
	I0307 10:21:00.627655    4364 logs.go:123] Gathering logs for kube-scheduler [f8136f0d1e9e] ...
	I0307 10:21:00.627664    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8136f0d1e9e"
	I0307 10:21:00.642579    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:21:00.642589    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:21:03.178521    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:21:08.180635    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:21:08.181002    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:21:08.215559    4364 logs.go:276] 1 containers: [427a7ba97366]
	I0307 10:21:08.215690    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:21:08.235460    4364 logs.go:276] 1 containers: [636a1d7c9755]
	I0307 10:21:08.235546    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:21:08.249636    4364 logs.go:276] 2 containers: [8c62957ec4e6 418ffa66d7e1]
	I0307 10:21:08.249710    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:21:08.266999    4364 logs.go:276] 1 containers: [f8136f0d1e9e]
	I0307 10:21:08.267058    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:21:08.277012    4364 logs.go:276] 1 containers: [e401b386b35d]
	I0307 10:21:08.277080    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:21:08.287804    4364 logs.go:276] 1 containers: [d1da6e8a1ecf]
	I0307 10:21:08.287860    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:21:08.298444    4364 logs.go:276] 0 containers: []
	W0307 10:21:08.298454    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:21:08.298501    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:21:08.308507    4364 logs.go:276] 1 containers: [826953fd4638]
	I0307 10:21:08.308522    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:21:08.308528    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:21:08.344219    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:21:08.344228    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:21:08.348661    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:21:08.348670    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:21:08.382483    4364 logs.go:123] Gathering logs for etcd [636a1d7c9755] ...
	I0307 10:21:08.382495    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636a1d7c9755"
	I0307 10:21:08.396660    4364 logs.go:123] Gathering logs for kube-scheduler [f8136f0d1e9e] ...
	I0307 10:21:08.396671    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8136f0d1e9e"
	I0307 10:21:08.411449    4364 logs.go:123] Gathering logs for kube-controller-manager [d1da6e8a1ecf] ...
	I0307 10:21:08.411461    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1da6e8a1ecf"
	I0307 10:21:08.429258    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:21:08.429269    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:21:08.440791    4364 logs.go:123] Gathering logs for kube-apiserver [427a7ba97366] ...
	I0307 10:21:08.440803    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 427a7ba97366"
	I0307 10:21:08.455414    4364 logs.go:123] Gathering logs for coredns [8c62957ec4e6] ...
	I0307 10:21:08.455425    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c62957ec4e6"
	I0307 10:21:08.467401    4364 logs.go:123] Gathering logs for coredns [418ffa66d7e1] ...
	I0307 10:21:08.467413    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ffa66d7e1"
	I0307 10:21:08.479278    4364 logs.go:123] Gathering logs for kube-proxy [e401b386b35d] ...
	I0307 10:21:08.479288    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e401b386b35d"
	I0307 10:21:08.496749    4364 logs.go:123] Gathering logs for storage-provisioner [826953fd4638] ...
	I0307 10:21:08.496760    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 826953fd4638"
	I0307 10:21:08.511310    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:21:08.511322    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
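
Each sweep begins by discovering which container backs each control-plane component, as in the logs.go:276 lines above: docker ps -a filtered by the k8s_<component> name prefix and formatted down to bare container IDs. A hypothetical Go helper showing that discovery step (containerIDs and the component list are illustrative, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists IDs of all containers (running or exited) whose name
// matches the k8s_<component> prefix, mirroring the command in the log.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
	}
}

An empty result is not an error: as with "kindnet" above, it simply produces the "0 containers" line and the "No container was found" warning before the sweep moves on.
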
	I0307 10:21:11.034543    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:21:16.035133    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:21:16.035512    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:21:16.078131    4364 logs.go:276] 1 containers: [427a7ba97366]
	I0307 10:21:16.078256    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:21:16.095975    4364 logs.go:276] 1 containers: [636a1d7c9755]
	I0307 10:21:16.096054    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:21:16.109857    4364 logs.go:276] 2 containers: [8c62957ec4e6 418ffa66d7e1]
	I0307 10:21:16.109936    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:21:16.121551    4364 logs.go:276] 1 containers: [f8136f0d1e9e]
	I0307 10:21:16.121621    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:21:16.132004    4364 logs.go:276] 1 containers: [e401b386b35d]
	I0307 10:21:16.132074    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:21:16.142761    4364 logs.go:276] 1 containers: [d1da6e8a1ecf]
	I0307 10:21:16.142830    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:21:16.152416    4364 logs.go:276] 0 containers: []
	W0307 10:21:16.152430    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:21:16.152486    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:21:16.162709    4364 logs.go:276] 1 containers: [826953fd4638]
	I0307 10:21:16.162721    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:21:16.162727    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:21:16.196800    4364 logs.go:123] Gathering logs for kube-apiserver [427a7ba97366] ...
	I0307 10:21:16.196810    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 427a7ba97366"
	I0307 10:21:16.211606    4364 logs.go:123] Gathering logs for coredns [8c62957ec4e6] ...
	I0307 10:21:16.211620    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c62957ec4e6"
	I0307 10:21:16.223243    4364 logs.go:123] Gathering logs for storage-provisioner [826953fd4638] ...
	I0307 10:21:16.223256    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 826953fd4638"
	I0307 10:21:16.234366    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:21:16.234379    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:21:16.258631    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:21:16.258639    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:21:16.269591    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:21:16.269602    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:21:16.303773    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:21:16.303781    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:21:16.308106    4364 logs.go:123] Gathering logs for etcd [636a1d7c9755] ...
	I0307 10:21:16.308112    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636a1d7c9755"
	I0307 10:21:16.321868    4364 logs.go:123] Gathering logs for coredns [418ffa66d7e1] ...
	I0307 10:21:16.321876    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ffa66d7e1"
	I0307 10:21:16.333349    4364 logs.go:123] Gathering logs for kube-scheduler [f8136f0d1e9e] ...
	I0307 10:21:16.333360    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8136f0d1e9e"
	I0307 10:21:16.348957    4364 logs.go:123] Gathering logs for kube-proxy [e401b386b35d] ...
	I0307 10:21:16.348967    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e401b386b35d"
	I0307 10:21:16.360754    4364 logs.go:123] Gathering logs for kube-controller-manager [d1da6e8a1ecf] ...
	I0307 10:21:16.360765    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1da6e8a1ecf"
	I0307 10:21:18.894828    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:21:23.896874    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:21:23.897068    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:21:23.915260    4364 logs.go:276] 1 containers: [427a7ba97366]
	I0307 10:21:23.915333    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:21:23.925877    4364 logs.go:276] 1 containers: [636a1d7c9755]
	I0307 10:21:23.925950    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:21:23.937160    4364 logs.go:276] 2 containers: [8c62957ec4e6 418ffa66d7e1]
	I0307 10:21:23.937231    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:21:23.947833    4364 logs.go:276] 1 containers: [f8136f0d1e9e]
	I0307 10:21:23.947894    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:21:23.962915    4364 logs.go:276] 1 containers: [e401b386b35d]
	I0307 10:21:23.962987    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:21:23.974031    4364 logs.go:276] 1 containers: [d1da6e8a1ecf]
	I0307 10:21:23.974097    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:21:23.984480    4364 logs.go:276] 0 containers: []
	W0307 10:21:23.984493    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:21:23.984559    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:21:23.994683    4364 logs.go:276] 1 containers: [826953fd4638]
	I0307 10:21:23.994697    4364 logs.go:123] Gathering logs for kube-controller-manager [d1da6e8a1ecf] ...
	I0307 10:21:23.994702    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1da6e8a1ecf"
	I0307 10:21:24.020560    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:21:24.020572    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:21:24.045279    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:21:24.045291    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:21:24.056627    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:21:24.056639    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:21:24.061255    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:21:24.061263    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:21:24.096082    4364 logs.go:123] Gathering logs for kube-scheduler [f8136f0d1e9e] ...
	I0307 10:21:24.096095    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8136f0d1e9e"
	I0307 10:21:24.118712    4364 logs.go:123] Gathering logs for kube-proxy [e401b386b35d] ...
	I0307 10:21:24.118722    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e401b386b35d"
	I0307 10:21:24.130680    4364 logs.go:123] Gathering logs for coredns [418ffa66d7e1] ...
	I0307 10:21:24.130691    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ffa66d7e1"
	I0307 10:21:24.147485    4364 logs.go:123] Gathering logs for storage-provisioner [826953fd4638] ...
	I0307 10:21:24.147499    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 826953fd4638"
	I0307 10:21:24.159340    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:21:24.159351    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:21:24.193675    4364 logs.go:123] Gathering logs for kube-apiserver [427a7ba97366] ...
	I0307 10:21:24.193690    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 427a7ba97366"
	I0307 10:21:24.208046    4364 logs.go:123] Gathering logs for etcd [636a1d7c9755] ...
	I0307 10:21:24.208058    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636a1d7c9755"
	I0307 10:21:24.222639    4364 logs.go:123] Gathering logs for coredns [8c62957ec4e6] ...
	I0307 10:21:24.222651    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c62957ec4e6"
	I0307 10:21:26.735413    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:21:31.735762    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:21:31.735970    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:21:31.759743    4364 logs.go:276] 1 containers: [427a7ba97366]
	I0307 10:21:31.759852    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:21:31.778613    4364 logs.go:276] 1 containers: [636a1d7c9755]
	I0307 10:21:31.778697    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:21:31.791434    4364 logs.go:276] 2 containers: [8c62957ec4e6 418ffa66d7e1]
	I0307 10:21:31.791511    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:21:31.801964    4364 logs.go:276] 1 containers: [f8136f0d1e9e]
	I0307 10:21:31.802031    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:21:31.811854    4364 logs.go:276] 1 containers: [e401b386b35d]
	I0307 10:21:31.811915    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:21:31.823639    4364 logs.go:276] 1 containers: [d1da6e8a1ecf]
	I0307 10:21:31.823719    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:21:31.834956    4364 logs.go:276] 0 containers: []
	W0307 10:21:31.834970    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:21:31.835027    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:21:31.845660    4364 logs.go:276] 1 containers: [826953fd4638]
	I0307 10:21:31.845677    4364 logs.go:123] Gathering logs for coredns [418ffa66d7e1] ...
	I0307 10:21:31.845682    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ffa66d7e1"
	I0307 10:21:31.857082    4364 logs.go:123] Gathering logs for kube-scheduler [f8136f0d1e9e] ...
	I0307 10:21:31.857091    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8136f0d1e9e"
	I0307 10:21:31.871059    4364 logs.go:123] Gathering logs for storage-provisioner [826953fd4638] ...
	I0307 10:21:31.871071    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 826953fd4638"
	I0307 10:21:31.882776    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:21:31.882789    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:21:31.907230    4364 logs.go:123] Gathering logs for kube-proxy [e401b386b35d] ...
	I0307 10:21:31.907238    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e401b386b35d"
	I0307 10:21:31.918472    4364 logs.go:123] Gathering logs for kube-controller-manager [d1da6e8a1ecf] ...
	I0307 10:21:31.918482    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1da6e8a1ecf"
	I0307 10:21:31.935573    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:21:31.935584    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:21:31.970391    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:21:31.970401    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:21:31.974544    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:21:31.974552    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:21:32.009047    4364 logs.go:123] Gathering logs for kube-apiserver [427a7ba97366] ...
	I0307 10:21:32.009058    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 427a7ba97366"
	I0307 10:21:32.023189    4364 logs.go:123] Gathering logs for etcd [636a1d7c9755] ...
	I0307 10:21:32.023197    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636a1d7c9755"
	I0307 10:21:32.036477    4364 logs.go:123] Gathering logs for coredns [8c62957ec4e6] ...
	I0307 10:21:32.036487    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c62957ec4e6"
	I0307 10:21:32.048699    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:21:32.048713    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:21:34.572564    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:21:39.574275    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:21:39.574761    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:21:39.613994    4364 logs.go:276] 1 containers: [427a7ba97366]
	I0307 10:21:39.614136    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:21:39.635335    4364 logs.go:276] 1 containers: [636a1d7c9755]
	I0307 10:21:39.635444    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:21:39.651160    4364 logs.go:276] 2 containers: [8c62957ec4e6 418ffa66d7e1]
	I0307 10:21:39.651235    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:21:39.663219    4364 logs.go:276] 1 containers: [f8136f0d1e9e]
	I0307 10:21:39.663283    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:21:39.674152    4364 logs.go:276] 1 containers: [e401b386b35d]
	I0307 10:21:39.674226    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:21:39.684140    4364 logs.go:276] 1 containers: [d1da6e8a1ecf]
	I0307 10:21:39.684200    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:21:39.694512    4364 logs.go:276] 0 containers: []
	W0307 10:21:39.694524    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:21:39.694585    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:21:39.704987    4364 logs.go:276] 1 containers: [826953fd4638]
	I0307 10:21:39.705001    4364 logs.go:123] Gathering logs for storage-provisioner [826953fd4638] ...
	I0307 10:21:39.705006    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 826953fd4638"
	I0307 10:21:39.716483    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:21:39.716496    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:21:39.749972    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:21:39.749980    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:21:39.754085    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:21:39.754091    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:21:39.786576    4364 logs.go:123] Gathering logs for kube-apiserver [427a7ba97366] ...
	I0307 10:21:39.786588    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 427a7ba97366"
	I0307 10:21:39.800451    4364 logs.go:123] Gathering logs for etcd [636a1d7c9755] ...
	I0307 10:21:39.800460    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636a1d7c9755"
	I0307 10:21:39.814609    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:21:39.814619    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:21:39.839447    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:21:39.839457    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:21:39.851225    4364 logs.go:123] Gathering logs for coredns [8c62957ec4e6] ...
	I0307 10:21:39.851234    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c62957ec4e6"
	I0307 10:21:39.862400    4364 logs.go:123] Gathering logs for coredns [418ffa66d7e1] ...
	I0307 10:21:39.862408    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ffa66d7e1"
	I0307 10:21:39.873819    4364 logs.go:123] Gathering logs for kube-scheduler [f8136f0d1e9e] ...
	I0307 10:21:39.873833    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8136f0d1e9e"
	I0307 10:21:39.889401    4364 logs.go:123] Gathering logs for kube-proxy [e401b386b35d] ...
	I0307 10:21:39.889413    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e401b386b35d"
	I0307 10:21:39.903522    4364 logs.go:123] Gathering logs for kube-controller-manager [d1da6e8a1ecf] ...
	I0307 10:21:39.903533    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1da6e8a1ecf"
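
The "container status" step uses a shell fallback rather than assuming a fixed runtime CLI: `which crictl || echo crictl` expands to crictl's path when it is installed, and to the bare (failing) command name otherwise, so the trailing `|| sudo docker ps -a` only runs when crictl is unavailable. A sketch of invoking that same one-liner from Go; the wrapper is hypothetical, but the bash expression is the one in the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// If crictl is installed, `which crictl` yields its path and the
	// first branch runs; if not, `echo crictl` substitutes a name that
	// fails to execute, and `|| sudo docker ps -a` takes over.
	cmd := exec.Command("/bin/bash", "-c",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Println("container status failed:", err)
	}
	fmt.Print(string(out))
}
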
	I0307 10:21:42.423239    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:21:47.425546    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:21:47.425966    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:21:47.468477    4364 logs.go:276] 1 containers: [427a7ba97366]
	I0307 10:21:47.468606    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:21:47.486863    4364 logs.go:276] 1 containers: [636a1d7c9755]
	I0307 10:21:47.486954    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:21:47.501488    4364 logs.go:276] 4 containers: [f03660e73de4 e1beb384d211 8c62957ec4e6 418ffa66d7e1]
	I0307 10:21:47.501558    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:21:47.513228    4364 logs.go:276] 1 containers: [f8136f0d1e9e]
	I0307 10:21:47.513292    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:21:47.524206    4364 logs.go:276] 1 containers: [e401b386b35d]
	I0307 10:21:47.524275    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:21:47.535015    4364 logs.go:276] 1 containers: [d1da6e8a1ecf]
	I0307 10:21:47.535070    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:21:47.544590    4364 logs.go:276] 0 containers: []
	W0307 10:21:47.544601    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:21:47.544653    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:21:47.555321    4364 logs.go:276] 1 containers: [826953fd4638]
	I0307 10:21:47.555336    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:21:47.555342    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:21:47.559858    4364 logs.go:123] Gathering logs for kube-apiserver [427a7ba97366] ...
	I0307 10:21:47.559867    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 427a7ba97366"
	I0307 10:21:47.574072    4364 logs.go:123] Gathering logs for coredns [8c62957ec4e6] ...
	I0307 10:21:47.574082    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c62957ec4e6"
	I0307 10:21:47.585645    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:21:47.585655    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:21:47.609082    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:21:47.609091    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:21:47.620419    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:21:47.620430    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:21:47.654458    4364 logs.go:123] Gathering logs for etcd [636a1d7c9755] ...
	I0307 10:21:47.654466    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636a1d7c9755"
	I0307 10:21:47.669032    4364 logs.go:123] Gathering logs for coredns [f03660e73de4] ...
	I0307 10:21:47.669041    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f03660e73de4"
	I0307 10:21:47.680200    4364 logs.go:123] Gathering logs for kube-scheduler [f8136f0d1e9e] ...
	I0307 10:21:47.680210    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8136f0d1e9e"
	I0307 10:21:47.695259    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:21:47.695269    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:21:47.729421    4364 logs.go:123] Gathering logs for coredns [418ffa66d7e1] ...
	I0307 10:21:47.729432    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ffa66d7e1"
	I0307 10:21:47.740971    4364 logs.go:123] Gathering logs for kube-controller-manager [d1da6e8a1ecf] ...
	I0307 10:21:47.740983    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1da6e8a1ecf"
	I0307 10:21:47.758346    4364 logs.go:123] Gathering logs for coredns [e1beb384d211] ...
	I0307 10:21:47.758354    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1beb384d211"
	I0307 10:21:47.768966    4364 logs.go:123] Gathering logs for kube-proxy [e401b386b35d] ...
	I0307 10:21:47.768978    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e401b386b35d"
	I0307 10:21:47.780455    4364 logs.go:123] Gathering logs for storage-provisioner [826953fd4638] ...
	I0307 10:21:47.780463    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 826953fd4638"
	I0307 10:21:50.294005    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:21:55.296609    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:21:55.297114    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:21:55.337808    4364 logs.go:276] 1 containers: [427a7ba97366]
	I0307 10:21:55.337946    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:21:55.360646    4364 logs.go:276] 1 containers: [636a1d7c9755]
	I0307 10:21:55.360751    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:21:55.375877    4364 logs.go:276] 4 containers: [f03660e73de4 e1beb384d211 8c62957ec4e6 418ffa66d7e1]
	I0307 10:21:55.375953    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:21:55.388508    4364 logs.go:276] 1 containers: [f8136f0d1e9e]
	I0307 10:21:55.388580    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:21:55.399143    4364 logs.go:276] 1 containers: [e401b386b35d]
	I0307 10:21:55.399216    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:21:55.410030    4364 logs.go:276] 1 containers: [d1da6e8a1ecf]
	I0307 10:21:55.410112    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:21:55.420005    4364 logs.go:276] 0 containers: []
	W0307 10:21:55.420014    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:21:55.420061    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:21:55.430579    4364 logs.go:276] 1 containers: [826953fd4638]
	I0307 10:21:55.430593    4364 logs.go:123] Gathering logs for coredns [e1beb384d211] ...
	I0307 10:21:55.430598    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1beb384d211"
	I0307 10:21:55.442197    4364 logs.go:123] Gathering logs for kube-controller-manager [d1da6e8a1ecf] ...
	I0307 10:21:55.442205    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1da6e8a1ecf"
	I0307 10:21:55.459600    4364 logs.go:123] Gathering logs for storage-provisioner [826953fd4638] ...
	I0307 10:21:55.459609    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 826953fd4638"
	I0307 10:21:55.471178    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:21:55.471188    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:21:55.483189    4364 logs.go:123] Gathering logs for etcd [636a1d7c9755] ...
	I0307 10:21:55.483200    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636a1d7c9755"
	I0307 10:21:55.497324    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:21:55.497334    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:21:55.501711    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:21:55.501717    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:21:55.535521    4364 logs.go:123] Gathering logs for coredns [f03660e73de4] ...
	I0307 10:21:55.535532    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f03660e73de4"
	I0307 10:21:55.547217    4364 logs.go:123] Gathering logs for coredns [418ffa66d7e1] ...
	I0307 10:21:55.547230    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ffa66d7e1"
	I0307 10:21:55.558749    4364 logs.go:123] Gathering logs for kube-scheduler [f8136f0d1e9e] ...
	I0307 10:21:55.558759    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8136f0d1e9e"
	I0307 10:21:55.577101    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:21:55.577112    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:21:55.602264    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:21:55.602272    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:21:55.636545    4364 logs.go:123] Gathering logs for coredns [8c62957ec4e6] ...
	I0307 10:21:55.636554    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c62957ec4e6"
	I0307 10:21:55.648053    4364 logs.go:123] Gathering logs for kube-proxy [e401b386b35d] ...
	I0307 10:21:55.648062    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e401b386b35d"
	I0307 10:21:55.659692    4364 logs.go:123] Gathering logs for kube-apiserver [427a7ba97366] ...
	I0307 10:21:55.659705    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 427a7ba97366"
	I0307 10:21:58.175739    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:22:03.178190    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:22:03.178663    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:22:03.226646    4364 logs.go:276] 1 containers: [427a7ba97366]
	I0307 10:22:03.226775    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:22:03.246698    4364 logs.go:276] 1 containers: [636a1d7c9755]
	I0307 10:22:03.246780    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:22:03.264130    4364 logs.go:276] 4 containers: [f03660e73de4 e1beb384d211 8c62957ec4e6 418ffa66d7e1]
	I0307 10:22:03.264199    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:22:03.275194    4364 logs.go:276] 1 containers: [f8136f0d1e9e]
	I0307 10:22:03.275257    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:22:03.285368    4364 logs.go:276] 1 containers: [e401b386b35d]
	I0307 10:22:03.285440    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:22:03.295677    4364 logs.go:276] 1 containers: [d1da6e8a1ecf]
	I0307 10:22:03.295750    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:22:03.306219    4364 logs.go:276] 0 containers: []
	W0307 10:22:03.306230    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:22:03.306286    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:22:03.316815    4364 logs.go:276] 1 containers: [826953fd4638]
	I0307 10:22:03.316832    4364 logs.go:123] Gathering logs for coredns [8c62957ec4e6] ...
	I0307 10:22:03.316837    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c62957ec4e6"
	I0307 10:22:03.328178    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:22:03.328192    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:22:03.363010    4364 logs.go:123] Gathering logs for etcd [636a1d7c9755] ...
	I0307 10:22:03.363021    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636a1d7c9755"
	I0307 10:22:03.377011    4364 logs.go:123] Gathering logs for coredns [f03660e73de4] ...
	I0307 10:22:03.377022    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f03660e73de4"
	I0307 10:22:03.388612    4364 logs.go:123] Gathering logs for kube-proxy [e401b386b35d] ...
	I0307 10:22:03.388625    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e401b386b35d"
	I0307 10:22:03.399989    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:22:03.399998    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:22:03.424153    4364 logs.go:123] Gathering logs for kube-controller-manager [d1da6e8a1ecf] ...
	I0307 10:22:03.424163    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1da6e8a1ecf"
	I0307 10:22:03.450782    4364 logs.go:123] Gathering logs for storage-provisioner [826953fd4638] ...
	I0307 10:22:03.450794    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 826953fd4638"
	I0307 10:22:03.462703    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:22:03.462717    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:22:03.497686    4364 logs.go:123] Gathering logs for kube-apiserver [427a7ba97366] ...
	I0307 10:22:03.497693    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 427a7ba97366"
	I0307 10:22:03.512342    4364 logs.go:123] Gathering logs for kube-scheduler [f8136f0d1e9e] ...
	I0307 10:22:03.512352    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8136f0d1e9e"
	I0307 10:22:03.527270    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:22:03.527282    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:22:03.538595    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:22:03.538606    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:22:03.542701    4364 logs.go:123] Gathering logs for coredns [e1beb384d211] ...
	I0307 10:22:03.542707    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1beb384d211"
	I0307 10:22:03.553864    4364 logs.go:123] Gathering logs for coredns [418ffa66d7e1] ...
	I0307 10:22:03.553878    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ffa66d7e1"
	I0307 10:22:06.068206    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:22:11.070352    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:22:11.070843    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:22:11.111655    4364 logs.go:276] 1 containers: [427a7ba97366]
	I0307 10:22:11.111780    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:22:11.132493    4364 logs.go:276] 1 containers: [636a1d7c9755]
	I0307 10:22:11.132613    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:22:11.150168    4364 logs.go:276] 4 containers: [f03660e73de4 e1beb384d211 8c62957ec4e6 418ffa66d7e1]
	I0307 10:22:11.150247    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:22:11.162683    4364 logs.go:276] 1 containers: [f8136f0d1e9e]
	I0307 10:22:11.162756    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:22:11.173541    4364 logs.go:276] 1 containers: [e401b386b35d]
	I0307 10:22:11.173611    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:22:11.183673    4364 logs.go:276] 1 containers: [d1da6e8a1ecf]
	I0307 10:22:11.183757    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:22:11.193448    4364 logs.go:276] 0 containers: []
	W0307 10:22:11.193458    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:22:11.193527    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:22:11.203343    4364 logs.go:276] 1 containers: [826953fd4638]
	I0307 10:22:11.203357    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:22:11.203363    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:22:11.237465    4364 logs.go:123] Gathering logs for kube-apiserver [427a7ba97366] ...
	I0307 10:22:11.237480    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 427a7ba97366"
	I0307 10:22:11.251903    4364 logs.go:123] Gathering logs for etcd [636a1d7c9755] ...
	I0307 10:22:11.251915    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636a1d7c9755"
	I0307 10:22:11.265944    4364 logs.go:123] Gathering logs for coredns [418ffa66d7e1] ...
	I0307 10:22:11.265956    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ffa66d7e1"
	I0307 10:22:11.281202    4364 logs.go:123] Gathering logs for kube-proxy [e401b386b35d] ...
	I0307 10:22:11.281214    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e401b386b35d"
	I0307 10:22:11.293092    4364 logs.go:123] Gathering logs for storage-provisioner [826953fd4638] ...
	I0307 10:22:11.293102    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 826953fd4638"
	I0307 10:22:11.305415    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:22:11.305428    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:22:11.329796    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:22:11.329803    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:22:11.333641    4364 logs.go:123] Gathering logs for coredns [f03660e73de4] ...
	I0307 10:22:11.333649    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f03660e73de4"
	I0307 10:22:11.344627    4364 logs.go:123] Gathering logs for kube-controller-manager [d1da6e8a1ecf] ...
	I0307 10:22:11.344638    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1da6e8a1ecf"
	I0307 10:22:11.362339    4364 logs.go:123] Gathering logs for coredns [e1beb384d211] ...
	I0307 10:22:11.362352    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1beb384d211"
	I0307 10:22:11.373424    4364 logs.go:123] Gathering logs for kube-scheduler [f8136f0d1e9e] ...
	I0307 10:22:11.373438    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8136f0d1e9e"
	I0307 10:22:11.388153    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:22:11.388165    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:22:11.421608    4364 logs.go:123] Gathering logs for coredns [8c62957ec4e6] ...
	I0307 10:22:11.421618    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c62957ec4e6"
	I0307 10:22:11.433280    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:22:11.433293    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:22:13.947393    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:22:18.949879    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:22:18.950091    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:22:18.973230    4364 logs.go:276] 1 containers: [427a7ba97366]
	I0307 10:22:18.973320    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:22:18.987091    4364 logs.go:276] 1 containers: [636a1d7c9755]
	I0307 10:22:18.987169    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:22:18.999845    4364 logs.go:276] 4 containers: [f03660e73de4 e1beb384d211 8c62957ec4e6 418ffa66d7e1]
	I0307 10:22:18.999913    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:22:19.010183    4364 logs.go:276] 1 containers: [f8136f0d1e9e]
	I0307 10:22:19.010241    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:22:19.020276    4364 logs.go:276] 1 containers: [e401b386b35d]
	I0307 10:22:19.020343    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:22:19.030555    4364 logs.go:276] 1 containers: [d1da6e8a1ecf]
	I0307 10:22:19.030626    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:22:19.040737    4364 logs.go:276] 0 containers: []
	W0307 10:22:19.040748    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:22:19.040806    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:22:19.051213    4364 logs.go:276] 1 containers: [826953fd4638]
	I0307 10:22:19.051230    4364 logs.go:123] Gathering logs for kube-apiserver [427a7ba97366] ...
	I0307 10:22:19.051235    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 427a7ba97366"
	I0307 10:22:19.065836    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:22:19.065849    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:22:19.100250    4364 logs.go:123] Gathering logs for etcd [636a1d7c9755] ...
	I0307 10:22:19.100259    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636a1d7c9755"
	I0307 10:22:19.113949    4364 logs.go:123] Gathering logs for kube-scheduler [f8136f0d1e9e] ...
	I0307 10:22:19.113961    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8136f0d1e9e"
	I0307 10:22:19.128829    4364 logs.go:123] Gathering logs for kube-controller-manager [d1da6e8a1ecf] ...
	I0307 10:22:19.128839    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1da6e8a1ecf"
	I0307 10:22:19.145954    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:22:19.145966    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:22:19.170374    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:22:19.170381    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:22:19.174687    4364 logs.go:123] Gathering logs for coredns [e1beb384d211] ...
	I0307 10:22:19.174695    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1beb384d211"
	I0307 10:22:19.186357    4364 logs.go:123] Gathering logs for coredns [8c62957ec4e6] ...
	I0307 10:22:19.186370    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c62957ec4e6"
	I0307 10:22:19.198846    4364 logs.go:123] Gathering logs for kube-proxy [e401b386b35d] ...
	I0307 10:22:19.198858    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e401b386b35d"
	I0307 10:22:19.210845    4364 logs.go:123] Gathering logs for storage-provisioner [826953fd4638] ...
	I0307 10:22:19.210860    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 826953fd4638"
	I0307 10:22:19.229428    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:22:19.229436    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:22:19.262987    4364 logs.go:123] Gathering logs for coredns [f03660e73de4] ...
	I0307 10:22:19.262999    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f03660e73de4"
	I0307 10:22:19.275145    4364 logs.go:123] Gathering logs for coredns [418ffa66d7e1] ...
	I0307 10:22:19.275156    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ffa66d7e1"
	I0307 10:22:19.286714    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:22:19.286723    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:22:21.803725    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:22:26.806054    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:22:26.806443    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:22:26.854981    4364 logs.go:276] 1 containers: [427a7ba97366]
	I0307 10:22:26.855077    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:22:26.875383    4364 logs.go:276] 1 containers: [636a1d7c9755]
	I0307 10:22:26.875448    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:22:26.893174    4364 logs.go:276] 4 containers: [f03660e73de4 e1beb384d211 8c62957ec4e6 418ffa66d7e1]
	I0307 10:22:26.893247    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:22:26.904073    4364 logs.go:276] 1 containers: [f8136f0d1e9e]
	I0307 10:22:26.904135    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:22:26.914603    4364 logs.go:276] 1 containers: [e401b386b35d]
	I0307 10:22:26.914676    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:22:26.925645    4364 logs.go:276] 1 containers: [d1da6e8a1ecf]
	I0307 10:22:26.925716    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:22:26.936291    4364 logs.go:276] 0 containers: []
	W0307 10:22:26.936302    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:22:26.936350    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:22:26.946982    4364 logs.go:276] 1 containers: [826953fd4638]
	I0307 10:22:26.947001    4364 logs.go:123] Gathering logs for coredns [e1beb384d211] ...
	I0307 10:22:26.947006    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1beb384d211"
	I0307 10:22:26.958561    4364 logs.go:123] Gathering logs for coredns [8c62957ec4e6] ...
	I0307 10:22:26.958572    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c62957ec4e6"
	I0307 10:22:26.970576    4364 logs.go:123] Gathering logs for kube-scheduler [f8136f0d1e9e] ...
	I0307 10:22:26.970587    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8136f0d1e9e"
	I0307 10:22:26.985862    4364 logs.go:123] Gathering logs for coredns [f03660e73de4] ...
	I0307 10:22:26.985874    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f03660e73de4"
	I0307 10:22:26.997470    4364 logs.go:123] Gathering logs for etcd [636a1d7c9755] ...
	I0307 10:22:26.997481    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636a1d7c9755"
	I0307 10:22:27.015647    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:22:27.015661    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:22:27.020126    4364 logs.go:123] Gathering logs for kube-proxy [e401b386b35d] ...
	I0307 10:22:27.020133    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e401b386b35d"
	I0307 10:22:27.035739    4364 logs.go:123] Gathering logs for storage-provisioner [826953fd4638] ...
	I0307 10:22:27.035749    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 826953fd4638"
	I0307 10:22:27.047064    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:22:27.047078    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:22:27.081404    4364 logs.go:123] Gathering logs for kube-apiserver [427a7ba97366] ...
	I0307 10:22:27.081413    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 427a7ba97366"
	I0307 10:22:27.094955    4364 logs.go:123] Gathering logs for coredns [418ffa66d7e1] ...
	I0307 10:22:27.094964    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ffa66d7e1"
	I0307 10:22:27.106220    4364 logs.go:123] Gathering logs for kube-controller-manager [d1da6e8a1ecf] ...
	I0307 10:22:27.106232    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1da6e8a1ecf"
	I0307 10:22:27.123776    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:22:27.123787    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:22:27.148108    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:22:27.148119    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:22:27.159739    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:22:27.159750    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:22:29.696702    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:22:34.698814    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:22:34.699353    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:22:34.740766    4364 logs.go:276] 1 containers: [427a7ba97366]
	I0307 10:22:34.740901    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:22:34.762541    4364 logs.go:276] 1 containers: [636a1d7c9755]
	I0307 10:22:34.762646    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:22:34.778369    4364 logs.go:276] 4 containers: [f03660e73de4 e1beb384d211 8c62957ec4e6 418ffa66d7e1]
	I0307 10:22:34.778450    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:22:34.795253    4364 logs.go:276] 1 containers: [f8136f0d1e9e]
	I0307 10:22:34.795327    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:22:34.805712    4364 logs.go:276] 1 containers: [e401b386b35d]
	I0307 10:22:34.805786    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:22:34.816063    4364 logs.go:276] 1 containers: [d1da6e8a1ecf]
	I0307 10:22:34.816133    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:22:34.827943    4364 logs.go:276] 0 containers: []
	W0307 10:22:34.827954    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:22:34.828009    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:22:34.842133    4364 logs.go:276] 1 containers: [826953fd4638]
	I0307 10:22:34.842147    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:22:34.842152    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:22:34.875851    4364 logs.go:123] Gathering logs for coredns [418ffa66d7e1] ...
	I0307 10:22:34.875861    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ffa66d7e1"
	I0307 10:22:34.887491    4364 logs.go:123] Gathering logs for kube-scheduler [f8136f0d1e9e] ...
	I0307 10:22:34.887501    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8136f0d1e9e"
	I0307 10:22:34.908611    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:22:34.908622    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:22:34.922677    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:22:34.922692    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:22:34.927260    4364 logs.go:123] Gathering logs for etcd [636a1d7c9755] ...
	I0307 10:22:34.927267    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636a1d7c9755"
	I0307 10:22:34.941294    4364 logs.go:123] Gathering logs for coredns [e1beb384d211] ...
	I0307 10:22:34.941304    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1beb384d211"
	I0307 10:22:34.952941    4364 logs.go:123] Gathering logs for coredns [f03660e73de4] ...
	I0307 10:22:34.952953    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f03660e73de4"
	I0307 10:22:34.964429    4364 logs.go:123] Gathering logs for storage-provisioner [826953fd4638] ...
	I0307 10:22:34.964439    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 826953fd4638"
	I0307 10:22:34.981084    4364 logs.go:123] Gathering logs for kube-proxy [e401b386b35d] ...
	I0307 10:22:34.981095    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e401b386b35d"
	I0307 10:22:34.992342    4364 logs.go:123] Gathering logs for kube-controller-manager [d1da6e8a1ecf] ...
	I0307 10:22:34.992352    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1da6e8a1ecf"
	I0307 10:22:35.009852    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:22:35.009866    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:22:35.034902    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:22:35.034914    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:22:35.069486    4364 logs.go:123] Gathering logs for kube-apiserver [427a7ba97366] ...
	I0307 10:22:35.069497    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 427a7ba97366"
	I0307 10:22:35.083849    4364 logs.go:123] Gathering logs for coredns [8c62957ec4e6] ...
	I0307 10:22:35.083861    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c62957ec4e6"
	I0307 10:22:37.598416    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:22:42.600448    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:22:42.600564    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:22:42.613880    4364 logs.go:276] 1 containers: [427a7ba97366]
	I0307 10:22:42.613955    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:22:42.625362    4364 logs.go:276] 1 containers: [636a1d7c9755]
	I0307 10:22:42.625430    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:22:42.639828    4364 logs.go:276] 4 containers: [f03660e73de4 e1beb384d211 8c62957ec4e6 418ffa66d7e1]
	I0307 10:22:42.639891    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:22:42.650154    4364 logs.go:276] 1 containers: [f8136f0d1e9e]
	I0307 10:22:42.650219    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:22:42.660327    4364 logs.go:276] 1 containers: [e401b386b35d]
	I0307 10:22:42.660384    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:22:42.670289    4364 logs.go:276] 1 containers: [d1da6e8a1ecf]
	I0307 10:22:42.670355    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:22:42.680662    4364 logs.go:276] 0 containers: []
	W0307 10:22:42.680671    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:22:42.680718    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:22:42.690845    4364 logs.go:276] 1 containers: [826953fd4638]
	I0307 10:22:42.690862    4364 logs.go:123] Gathering logs for etcd [636a1d7c9755] ...
	I0307 10:22:42.690867    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636a1d7c9755"
	I0307 10:22:42.704981    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:22:42.704992    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:22:42.728328    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:22:42.728337    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:22:42.732314    4364 logs.go:123] Gathering logs for kube-apiserver [427a7ba97366] ...
	I0307 10:22:42.732319    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 427a7ba97366"
	I0307 10:22:42.745966    4364 logs.go:123] Gathering logs for kube-scheduler [f8136f0d1e9e] ...
	I0307 10:22:42.745977    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8136f0d1e9e"
	I0307 10:22:42.760220    4364 logs.go:123] Gathering logs for kube-proxy [e401b386b35d] ...
	I0307 10:22:42.760229    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e401b386b35d"
	I0307 10:22:42.772245    4364 logs.go:123] Gathering logs for kube-controller-manager [d1da6e8a1ecf] ...
	I0307 10:22:42.772257    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1da6e8a1ecf"
	I0307 10:22:42.794530    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:22:42.794540    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:22:42.809927    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:22:42.809939    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:22:42.847648    4364 logs.go:123] Gathering logs for coredns [e1beb384d211] ...
	I0307 10:22:42.847659    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1beb384d211"
	I0307 10:22:42.861426    4364 logs.go:123] Gathering logs for storage-provisioner [826953fd4638] ...
	I0307 10:22:42.861438    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 826953fd4638"
	I0307 10:22:42.872624    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:22:42.872636    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:22:42.905768    4364 logs.go:123] Gathering logs for coredns [f03660e73de4] ...
	I0307 10:22:42.905775    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f03660e73de4"
	I0307 10:22:42.916997    4364 logs.go:123] Gathering logs for coredns [8c62957ec4e6] ...
	I0307 10:22:42.917010    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c62957ec4e6"
	I0307 10:22:42.928960    4364 logs.go:123] Gathering logs for coredns [418ffa66d7e1] ...
	I0307 10:22:42.928969    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ffa66d7e1"
	I0307 10:22:45.442555    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:22:50.445233    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:22:50.445609    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:22:50.479644    4364 logs.go:276] 1 containers: [427a7ba97366]
	I0307 10:22:50.479754    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:22:50.504791    4364 logs.go:276] 1 containers: [636a1d7c9755]
	I0307 10:22:50.504886    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:22:50.519052    4364 logs.go:276] 4 containers: [f03660e73de4 e1beb384d211 8c62957ec4e6 418ffa66d7e1]
	I0307 10:22:50.519125    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:22:50.536205    4364 logs.go:276] 1 containers: [f8136f0d1e9e]
	I0307 10:22:50.536271    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:22:50.546641    4364 logs.go:276] 1 containers: [e401b386b35d]
	I0307 10:22:50.546710    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:22:50.557491    4364 logs.go:276] 1 containers: [d1da6e8a1ecf]
	I0307 10:22:50.557557    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:22:50.575561    4364 logs.go:276] 0 containers: []
	W0307 10:22:50.575571    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:22:50.575622    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:22:50.586414    4364 logs.go:276] 1 containers: [826953fd4638]
	I0307 10:22:50.586433    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:22:50.586438    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:22:50.621255    4364 logs.go:123] Gathering logs for kube-proxy [e401b386b35d] ...
	I0307 10:22:50.621267    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e401b386b35d"
	I0307 10:22:50.632923    4364 logs.go:123] Gathering logs for kube-scheduler [f8136f0d1e9e] ...
	I0307 10:22:50.632935    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8136f0d1e9e"
	I0307 10:22:50.655846    4364 logs.go:123] Gathering logs for kube-controller-manager [d1da6e8a1ecf] ...
	I0307 10:22:50.655858    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1da6e8a1ecf"
	I0307 10:22:50.673454    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:22:50.673465    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:22:50.685461    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:22:50.685473    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:22:50.719691    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:22:50.719699    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:22:50.723847    4364 logs.go:123] Gathering logs for coredns [e1beb384d211] ...
	I0307 10:22:50.723857    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1beb384d211"
	I0307 10:22:50.735252    4364 logs.go:123] Gathering logs for coredns [8c62957ec4e6] ...
	I0307 10:22:50.735262    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c62957ec4e6"
	I0307 10:22:50.746878    4364 logs.go:123] Gathering logs for coredns [418ffa66d7e1] ...
	I0307 10:22:50.746888    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ffa66d7e1"
	I0307 10:22:50.758570    4364 logs.go:123] Gathering logs for etcd [636a1d7c9755] ...
	I0307 10:22:50.758583    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636a1d7c9755"
	I0307 10:22:50.772119    4364 logs.go:123] Gathering logs for storage-provisioner [826953fd4638] ...
	I0307 10:22:50.772127    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 826953fd4638"
	I0307 10:22:50.783446    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:22:50.783458    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:22:50.806627    4364 logs.go:123] Gathering logs for kube-apiserver [427a7ba97366] ...
	I0307 10:22:50.806634    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 427a7ba97366"
	I0307 10:22:50.820626    4364 logs.go:123] Gathering logs for coredns [f03660e73de4] ...
	I0307 10:22:50.820636    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f03660e73de4"
	I0307 10:22:53.334751    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:22:58.336884    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:22:58.337356    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:22:58.376669    4364 logs.go:276] 1 containers: [427a7ba97366]
	I0307 10:22:58.376795    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:22:58.402206    4364 logs.go:276] 1 containers: [636a1d7c9755]
	I0307 10:22:58.402325    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:22:58.417572    4364 logs.go:276] 4 containers: [f03660e73de4 e1beb384d211 8c62957ec4e6 418ffa66d7e1]
	I0307 10:22:58.417656    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:22:58.429846    4364 logs.go:276] 1 containers: [f8136f0d1e9e]
	I0307 10:22:58.429916    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:22:58.440229    4364 logs.go:276] 1 containers: [e401b386b35d]
	I0307 10:22:58.440296    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:22:58.450669    4364 logs.go:276] 1 containers: [d1da6e8a1ecf]
	I0307 10:22:58.450745    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:22:58.462539    4364 logs.go:276] 0 containers: []
	W0307 10:22:58.462549    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:22:58.462607    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:22:58.473250    4364 logs.go:276] 1 containers: [826953fd4638]
	I0307 10:22:58.473266    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:22:58.473272    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:22:58.514952    4364 logs.go:123] Gathering logs for kube-apiserver [427a7ba97366] ...
	I0307 10:22:58.514961    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 427a7ba97366"
	I0307 10:22:58.528986    4364 logs.go:123] Gathering logs for etcd [636a1d7c9755] ...
	I0307 10:22:58.528996    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636a1d7c9755"
	I0307 10:22:58.546379    4364 logs.go:123] Gathering logs for coredns [8c62957ec4e6] ...
	I0307 10:22:58.546392    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c62957ec4e6"
	I0307 10:22:58.558598    4364 logs.go:123] Gathering logs for storage-provisioner [826953fd4638] ...
	I0307 10:22:58.558608    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 826953fd4638"
	I0307 10:22:58.570009    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:22:58.570018    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:22:58.581828    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:22:58.581838    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:22:58.586568    4364 logs.go:123] Gathering logs for coredns [f03660e73de4] ...
	I0307 10:22:58.586578    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f03660e73de4"
	I0307 10:22:58.597818    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:22:58.597829    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:22:58.622359    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:22:58.622369    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:22:58.656731    4364 logs.go:123] Gathering logs for kube-scheduler [f8136f0d1e9e] ...
	I0307 10:22:58.656737    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8136f0d1e9e"
	I0307 10:22:58.671370    4364 logs.go:123] Gathering logs for kube-proxy [e401b386b35d] ...
	I0307 10:22:58.671383    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e401b386b35d"
	I0307 10:22:58.682635    4364 logs.go:123] Gathering logs for kube-controller-manager [d1da6e8a1ecf] ...
	I0307 10:22:58.682645    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1da6e8a1ecf"
	I0307 10:22:58.700603    4364 logs.go:123] Gathering logs for coredns [e1beb384d211] ...
	I0307 10:22:58.700612    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1beb384d211"
	I0307 10:22:58.712439    4364 logs.go:123] Gathering logs for coredns [418ffa66d7e1] ...
	I0307 10:22:58.712453    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ffa66d7e1"
	I0307 10:23:01.224564    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:23:06.226907    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:23:06.226982    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:23:06.238391    4364 logs.go:276] 1 containers: [427a7ba97366]
	I0307 10:23:06.238451    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:23:06.249397    4364 logs.go:276] 1 containers: [636a1d7c9755]
	I0307 10:23:06.249453    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:23:06.260543    4364 logs.go:276] 4 containers: [f03660e73de4 e1beb384d211 8c62957ec4e6 418ffa66d7e1]
	I0307 10:23:06.260598    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:23:06.272830    4364 logs.go:276] 1 containers: [f8136f0d1e9e]
	I0307 10:23:06.272885    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:23:06.284542    4364 logs.go:276] 1 containers: [e401b386b35d]
	I0307 10:23:06.284599    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:23:06.300314    4364 logs.go:276] 1 containers: [d1da6e8a1ecf]
	I0307 10:23:06.300366    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:23:06.312470    4364 logs.go:276] 0 containers: []
	W0307 10:23:06.312482    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:23:06.312542    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:23:06.325065    4364 logs.go:276] 1 containers: [826953fd4638]
	I0307 10:23:06.325080    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:23:06.325085    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:23:06.363393    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:23:06.363411    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:23:06.368176    4364 logs.go:123] Gathering logs for coredns [e1beb384d211] ...
	I0307 10:23:06.368188    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1beb384d211"
	I0307 10:23:06.381374    4364 logs.go:123] Gathering logs for kube-scheduler [f8136f0d1e9e] ...
	I0307 10:23:06.381387    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8136f0d1e9e"
	I0307 10:23:06.398408    4364 logs.go:123] Gathering logs for storage-provisioner [826953fd4638] ...
	I0307 10:23:06.398422    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 826953fd4638"
	I0307 10:23:06.411104    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:23:06.411111    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:23:06.449720    4364 logs.go:123] Gathering logs for coredns [418ffa66d7e1] ...
	I0307 10:23:06.449731    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ffa66d7e1"
	I0307 10:23:06.463215    4364 logs.go:123] Gathering logs for kube-apiserver [427a7ba97366] ...
	I0307 10:23:06.463228    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 427a7ba97366"
	I0307 10:23:06.478960    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:23:06.478971    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:23:06.492947    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:23:06.492955    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:23:06.516549    4364 logs.go:123] Gathering logs for etcd [636a1d7c9755] ...
	I0307 10:23:06.516560    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636a1d7c9755"
	I0307 10:23:06.530484    4364 logs.go:123] Gathering logs for coredns [f03660e73de4] ...
	I0307 10:23:06.530491    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f03660e73de4"
	I0307 10:23:06.541715    4364 logs.go:123] Gathering logs for coredns [8c62957ec4e6] ...
	I0307 10:23:06.541724    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c62957ec4e6"
	I0307 10:23:06.553409    4364 logs.go:123] Gathering logs for kube-proxy [e401b386b35d] ...
	I0307 10:23:06.553421    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e401b386b35d"
	I0307 10:23:06.565076    4364 logs.go:123] Gathering logs for kube-controller-manager [d1da6e8a1ecf] ...
	I0307 10:23:06.565088    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1da6e8a1ecf"
	I0307 10:23:09.088583    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:23:14.089194    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:23:14.089285    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:23:14.100381    4364 logs.go:276] 1 containers: [427a7ba97366]
	I0307 10:23:14.100446    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:23:14.111288    4364 logs.go:276] 1 containers: [636a1d7c9755]
	I0307 10:23:14.111358    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:23:14.122361    4364 logs.go:276] 4 containers: [f03660e73de4 e1beb384d211 8c62957ec4e6 418ffa66d7e1]
	I0307 10:23:14.122435    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:23:14.132981    4364 logs.go:276] 1 containers: [f8136f0d1e9e]
	I0307 10:23:14.133041    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:23:14.144571    4364 logs.go:276] 1 containers: [e401b386b35d]
	I0307 10:23:14.144622    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:23:14.155832    4364 logs.go:276] 1 containers: [d1da6e8a1ecf]
	I0307 10:23:14.155879    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:23:14.166644    4364 logs.go:276] 0 containers: []
	W0307 10:23:14.166655    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:23:14.166706    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:23:14.178078    4364 logs.go:276] 1 containers: [826953fd4638]
	I0307 10:23:14.178096    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:23:14.178101    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:23:14.211182    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:23:14.211191    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:23:14.215366    4364 logs.go:123] Gathering logs for kube-apiserver [427a7ba97366] ...
	I0307 10:23:14.215372    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 427a7ba97366"
	I0307 10:23:14.229592    4364 logs.go:123] Gathering logs for coredns [8c62957ec4e6] ...
	I0307 10:23:14.229604    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c62957ec4e6"
	I0307 10:23:14.240877    4364 logs.go:123] Gathering logs for kube-controller-manager [d1da6e8a1ecf] ...
	I0307 10:23:14.240888    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1da6e8a1ecf"
	I0307 10:23:14.257863    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:23:14.257873    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:23:14.294021    4364 logs.go:123] Gathering logs for etcd [636a1d7c9755] ...
	I0307 10:23:14.294033    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636a1d7c9755"
	I0307 10:23:14.308172    4364 logs.go:123] Gathering logs for coredns [418ffa66d7e1] ...
	I0307 10:23:14.308182    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ffa66d7e1"
	I0307 10:23:14.319903    4364 logs.go:123] Gathering logs for kube-scheduler [f8136f0d1e9e] ...
	I0307 10:23:14.319912    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8136f0d1e9e"
	I0307 10:23:14.334523    4364 logs.go:123] Gathering logs for storage-provisioner [826953fd4638] ...
	I0307 10:23:14.334532    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 826953fd4638"
	I0307 10:23:14.346047    4364 logs.go:123] Gathering logs for coredns [f03660e73de4] ...
	I0307 10:23:14.346056    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f03660e73de4"
	I0307 10:23:14.357112    4364 logs.go:123] Gathering logs for coredns [e1beb384d211] ...
	I0307 10:23:14.357122    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1beb384d211"
	I0307 10:23:14.368060    4364 logs.go:123] Gathering logs for kube-proxy [e401b386b35d] ...
	I0307 10:23:14.368072    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e401b386b35d"
	I0307 10:23:14.380013    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:23:14.380022    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:23:14.403651    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:23:14.403658    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:23:16.916374    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:23:21.917699    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:23:21.918057    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0307 10:23:21.957882    4364 logs.go:276] 1 containers: [427a7ba97366]
	I0307 10:23:21.958015    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0307 10:23:21.979132    4364 logs.go:276] 1 containers: [636a1d7c9755]
	I0307 10:23:21.979236    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0307 10:23:21.995270    4364 logs.go:276] 4 containers: [f03660e73de4 e1beb384d211 8c62957ec4e6 418ffa66d7e1]
	I0307 10:23:21.995349    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0307 10:23:22.007937    4364 logs.go:276] 1 containers: [f8136f0d1e9e]
	I0307 10:23:22.008006    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0307 10:23:22.018481    4364 logs.go:276] 1 containers: [e401b386b35d]
	I0307 10:23:22.018548    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0307 10:23:22.028490    4364 logs.go:276] 1 containers: [d1da6e8a1ecf]
	I0307 10:23:22.028545    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0307 10:23:22.038668    4364 logs.go:276] 0 containers: []
	W0307 10:23:22.038678    4364 logs.go:278] No container was found matching "kindnet"
	I0307 10:23:22.038727    4364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0307 10:23:22.056557    4364 logs.go:276] 1 containers: [826953fd4638]
	I0307 10:23:22.056573    4364 logs.go:123] Gathering logs for coredns [e1beb384d211] ...
	I0307 10:23:22.056578    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1beb384d211"
	I0307 10:23:22.068226    4364 logs.go:123] Gathering logs for kube-proxy [e401b386b35d] ...
	I0307 10:23:22.068238    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e401b386b35d"
	I0307 10:23:22.079573    4364 logs.go:123] Gathering logs for Docker ...
	I0307 10:23:22.079581    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0307 10:23:22.102235    4364 logs.go:123] Gathering logs for kube-apiserver [427a7ba97366] ...
	I0307 10:23:22.102245    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 427a7ba97366"
	I0307 10:23:22.116039    4364 logs.go:123] Gathering logs for describe nodes ...
	I0307 10:23:22.116052    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 10:23:22.149746    4364 logs.go:123] Gathering logs for coredns [f03660e73de4] ...
	I0307 10:23:22.149757    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f03660e73de4"
	I0307 10:23:22.161028    4364 logs.go:123] Gathering logs for storage-provisioner [826953fd4638] ...
	I0307 10:23:22.161042    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 826953fd4638"
	I0307 10:23:22.172354    4364 logs.go:123] Gathering logs for container status ...
	I0307 10:23:22.172365    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 10:23:22.184412    4364 logs.go:123] Gathering logs for dmesg ...
	I0307 10:23:22.184422    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 10:23:22.188388    4364 logs.go:123] Gathering logs for etcd [636a1d7c9755] ...
	I0307 10:23:22.188394    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636a1d7c9755"
	I0307 10:23:22.202177    4364 logs.go:123] Gathering logs for kube-scheduler [f8136f0d1e9e] ...
	I0307 10:23:22.202185    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8136f0d1e9e"
	I0307 10:23:22.217917    4364 logs.go:123] Gathering logs for kubelet ...
	I0307 10:23:22.217926    4364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 10:23:22.252973    4364 logs.go:123] Gathering logs for coredns [418ffa66d7e1] ...
	I0307 10:23:22.252979    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418ffa66d7e1"
	I0307 10:23:22.264788    4364 logs.go:123] Gathering logs for kube-controller-manager [d1da6e8a1ecf] ...
	I0307 10:23:22.264803    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1da6e8a1ecf"
	I0307 10:23:22.281902    4364 logs.go:123] Gathering logs for coredns [8c62957ec4e6] ...
	I0307 10:23:22.281911    4364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c62957ec4e6"
	I0307 10:23:24.794656    4364 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0307 10:23:29.795289    4364 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 10:23:29.798566    4364 out.go:177] 
	W0307 10:23:29.802594    4364 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0307 10:23:29.802608    4364 out.go:239] * 
	W0307 10:23:29.803044    4364 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:23:29.813529    4364 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-853000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (580.56s)

                                                
                                    
TestPause/serial/Start (9.97s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-937000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-937000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.900931125s)

                                                
                                                
-- stdout --
	* [pause-937000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-937000" primary control-plane node in "pause-937000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-937000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-937000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-937000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-937000 -n pause-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-937000 -n pause-937000: exit status 7 (65.303416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-937000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.97s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (10.02s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-123000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-123000 --driver=qemu2 : exit status 80 (9.952446792s)

                                                
                                                
-- stdout --
	* [NoKubernetes-123000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-123000" primary control-plane node in "NoKubernetes-123000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-123000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-123000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-123000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-123000 -n NoKubernetes-123000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-123000 -n NoKubernetes-123000: exit status 7 (70.707375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-123000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (10.02s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.92s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-123000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-123000 --no-kubernetes --driver=qemu2 : exit status 80 (5.855297916s)

                                                
                                                
-- stdout --
	* [NoKubernetes-123000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-123000
	* Restarting existing qemu2 VM for "NoKubernetes-123000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-123000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-123000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-123000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-123000 -n NoKubernetes-123000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-123000 -n NoKubernetes-123000: exit status 7 (67.678458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-123000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.92s)

                                                
                                    
TestNoKubernetes/serial/Start (5.89s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-123000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-123000 --no-kubernetes --driver=qemu2 : exit status 80 (5.842310625s)

                                                
                                                
-- stdout --
	* [NoKubernetes-123000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-123000
	* Restarting existing qemu2 VM for "NoKubernetes-123000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-123000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-123000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-123000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-123000 -n NoKubernetes-123000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-123000 -n NoKubernetes-123000: exit status 7 (48.8885ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-123000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.89s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-123000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-123000 --driver=qemu2 : exit status 80 (5.881630917s)

                                                
                                                
-- stdout --
	* [NoKubernetes-123000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-123000
	* Restarting existing qemu2 VM for "NoKubernetes-123000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-123000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-123000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-123000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-123000 -n NoKubernetes-123000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-123000 -n NoKubernetes-123000: exit status 7 (66.410458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-123000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.95s)
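
Every failure in this group reduces to the same root cause: nothing is accepting connections on /var/run/socket_vmnet, so socket_vmnet_client exits with "Connection refused" before QEMU can attach to the vmnet network. A minimal Go probe for local triage (a sketch, not part of the test suite; the socket path is taken from the logs above):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// A plain unix-domain dial is enough to distinguish "daemon down"
	// (the connection-refused error seen throughout this report) from
	// other start failures.
	const sock = "/var/run/socket_vmnet" // path from the failing qemu invocations
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}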

TestNetworkPlugins/group/auto/Start (9.76s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-819000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-819000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.758785s)

-- stdout --
	* [auto-819000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-819000" primary control-plane node in "auto-819000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-819000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 10:22:05.588533    4640 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:22:05.588696    4640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:22:05.588700    4640 out.go:304] Setting ErrFile to fd 2...
	I0307 10:22:05.588702    4640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:22:05.588819    4640 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:22:05.589999    4640 out.go:298] Setting JSON to false
	I0307 10:22:05.606796    4640 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4897,"bootTime":1709830828,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 10:22:05.606854    4640 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:22:05.612814    4640 out.go:177] * [auto-819000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 10:22:05.619721    4640 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 10:22:05.619780    4640 notify.go:220] Checking for updates...
	I0307 10:22:05.627723    4640 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:22:05.630752    4640 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 10:22:05.633827    4640 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:22:05.636741    4640 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 10:22:05.639768    4640 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:22:05.643185    4640 config.go:182] Loaded profile config "multinode-606000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:22:05.643254    4640 config.go:182] Loaded profile config "stopped-upgrade-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 10:22:05.643310    4640 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:22:05.647680    4640 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 10:22:05.654768    4640 start.go:297] selected driver: qemu2
	I0307 10:22:05.654774    4640 start.go:901] validating driver "qemu2" against <nil>
	I0307 10:22:05.654780    4640 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:22:05.657201    4640 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 10:22:05.659769    4640 out.go:177] * Automatically selected the socket_vmnet network
	I0307 10:22:05.662926    4640 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 10:22:05.662972    4640 cni.go:84] Creating CNI manager for ""
	I0307 10:22:05.662981    4640 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 10:22:05.662985    4640 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 10:22:05.663017    4640 start.go:340] cluster config:
	{Name:auto-819000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:22:05.667524    4640 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:22:05.674767    4640 out.go:177] * Starting "auto-819000" primary control-plane node in "auto-819000" cluster
	I0307 10:22:05.678745    4640 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 10:22:05.678767    4640 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 10:22:05.678777    4640 cache.go:56] Caching tarball of preloaded images
	I0307 10:22:05.678835    4640 preload.go:173] Found /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 10:22:05.678841    4640 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 10:22:05.678903    4640 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/auto-819000/config.json ...
	I0307 10:22:05.678915    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/auto-819000/config.json: {Name:mk86192b986fe5d2e954b24b066886eafdf2d3b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:22:05.679204    4640 start.go:360] acquireMachinesLock for auto-819000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:22:05.679245    4640 start.go:364] duration metric: took 33.375µs to acquireMachinesLock for "auto-819000"
	I0307 10:22:05.679257    4640 start.go:93] Provisioning new machine with config: &{Name:auto-819000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:22:05.679286    4640 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:22:05.687745    4640 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 10:22:05.704290    4640 start.go:159] libmachine.API.Create for "auto-819000" (driver="qemu2")
	I0307 10:22:05.704316    4640 client.go:168] LocalClient.Create starting
	I0307 10:22:05.704379    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:22:05.704412    4640 main.go:141] libmachine: Decoding PEM data...
	I0307 10:22:05.704424    4640 main.go:141] libmachine: Parsing certificate...
	I0307 10:22:05.704471    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:22:05.704492    4640 main.go:141] libmachine: Decoding PEM data...
	I0307 10:22:05.704502    4640 main.go:141] libmachine: Parsing certificate...
	I0307 10:22:05.704875    4640 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:22:05.841405    4640 main.go:141] libmachine: Creating SSH key...
	I0307 10:22:05.922861    4640 main.go:141] libmachine: Creating Disk image...
	I0307 10:22:05.922872    4640 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:22:05.924061    4640 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/auto-819000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/auto-819000/disk.qcow2
	I0307 10:22:05.937855    4640 main.go:141] libmachine: STDOUT: 
	I0307 10:22:05.937880    4640 main.go:141] libmachine: STDERR: 
	I0307 10:22:05.937948    4640 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/auto-819000/disk.qcow2 +20000M
	I0307 10:22:05.949374    4640 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:22:05.949392    4640 main.go:141] libmachine: STDERR: 
	I0307 10:22:05.949417    4640 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/auto-819000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/auto-819000/disk.qcow2
	I0307 10:22:05.949423    4640 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:22:05.949451    4640 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/auto-819000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/auto-819000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/auto-819000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:af:46:b2:6d:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/auto-819000/disk.qcow2
	I0307 10:22:05.951407    4640 main.go:141] libmachine: STDOUT: 
	I0307 10:22:05.951425    4640 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:22:05.951446    4640 client.go:171] duration metric: took 247.132334ms to LocalClient.Create
	I0307 10:22:07.953540    4640 start.go:128] duration metric: took 2.274314791s to createHost
	I0307 10:22:07.953586    4640 start.go:83] releasing machines lock for "auto-819000", held for 2.274409709s
	W0307 10:22:07.953634    4640 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:22:07.964794    4640 out.go:177] * Deleting "auto-819000" in qemu2 ...
	W0307 10:22:07.982243    4640 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:22:07.982263    4640 start.go:728] Will try again in 5 seconds ...
	I0307 10:22:12.983599    4640 start.go:360] acquireMachinesLock for auto-819000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:22:12.984087    4640 start.go:364] duration metric: took 347.125µs to acquireMachinesLock for "auto-819000"
	I0307 10:22:12.984215    4640 start.go:93] Provisioning new machine with config: &{Name:auto-819000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:22:12.984477    4640 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:22:12.989124    4640 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 10:22:13.034773    4640 start.go:159] libmachine.API.Create for "auto-819000" (driver="qemu2")
	I0307 10:22:13.034827    4640 client.go:168] LocalClient.Create starting
	I0307 10:22:13.034943    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:22:13.035003    4640 main.go:141] libmachine: Decoding PEM data...
	I0307 10:22:13.035020    4640 main.go:141] libmachine: Parsing certificate...
	I0307 10:22:13.035082    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:22:13.035124    4640 main.go:141] libmachine: Decoding PEM data...
	I0307 10:22:13.035137    4640 main.go:141] libmachine: Parsing certificate...
	I0307 10:22:13.035682    4640 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:22:13.182910    4640 main.go:141] libmachine: Creating SSH key...
	I0307 10:22:13.242089    4640 main.go:141] libmachine: Creating Disk image...
	I0307 10:22:13.242095    4640 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:22:13.242281    4640 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/auto-819000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/auto-819000/disk.qcow2
	I0307 10:22:13.254880    4640 main.go:141] libmachine: STDOUT: 
	I0307 10:22:13.254906    4640 main.go:141] libmachine: STDERR: 
	I0307 10:22:13.254969    4640 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/auto-819000/disk.qcow2 +20000M
	I0307 10:22:13.265811    4640 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:22:13.265826    4640 main.go:141] libmachine: STDERR: 
	I0307 10:22:13.265836    4640 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/auto-819000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/auto-819000/disk.qcow2
	I0307 10:22:13.265840    4640 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:22:13.265871    4640 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/auto-819000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/auto-819000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/auto-819000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:38:16:63:bd:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/auto-819000/disk.qcow2
	I0307 10:22:13.267607    4640 main.go:141] libmachine: STDOUT: 
	I0307 10:22:13.267623    4640 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:22:13.267636    4640 client.go:171] duration metric: took 232.811917ms to LocalClient.Create
	I0307 10:22:15.269962    4640 start.go:128] duration metric: took 2.285522s to createHost
	I0307 10:22:15.270031    4640 start.go:83] releasing machines lock for "auto-819000", held for 2.285998291s
	W0307 10:22:15.270369    4640 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-819000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-819000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:22:15.286025    4640 out.go:177] 
	W0307 10:22:15.290103    4640 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:22:15.290146    4640 out.go:239] * 
	* 
	W0307 10:22:15.293038    4640 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:22:15.302884    4640 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.76s)
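
The qemu-system-aarch64 invocations above show how the network is wired: socket_vmnet_client dials /var/run/socket_vmnet and hands the connected descriptor to QEMU as fd 3 ("-netdev socket,id=net0,fd=3"), which is why a refused connection aborts the start before QEMU ever runs. A sketch of that descriptor-passing pattern using Go's os/exec (illustrative only, not minikube's or socket_vmnet's actual code; the remaining QEMU flags from the log are omitted):

package main

import (
	"log"
	"net"
	"os"
	"os/exec"
)

func main() {
	// This dial is the step that fails with "Connection refused" throughout
	// the report: no daemon is listening on the socket.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		log.Fatalf("dial socket_vmnet: %v", err)
	}
	f, err := conn.(*net.UnixConn).File() // duplicate the descriptor for the child
	if err != nil {
		log.Fatal(err)
	}
	// ExtraFiles[0] becomes file descriptor 3 in the child process, matching
	// the fd=3 that QEMU is told to read its network connection from.
	cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
	cmd.ExtraFiles = []*os.File{f}
	if err := cmd.Run(); err != nil {
		log.Fatalf("qemu: %v", err)
	}
}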

TestNetworkPlugins/group/kindnet/Start (9.81s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-819000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-819000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.809957458s)

-- stdout --
	* [kindnet-819000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-819000" primary control-plane node in "kindnet-819000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-819000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 10:22:17.596247    4750 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:22:17.596370    4750 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:22:17.596373    4750 out.go:304] Setting ErrFile to fd 2...
	I0307 10:22:17.596376    4750 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:22:17.596519    4750 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:22:17.597607    4750 out.go:298] Setting JSON to false
	I0307 10:22:17.613778    4750 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4909,"bootTime":1709830828,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 10:22:17.613842    4750 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:22:17.619070    4750 out.go:177] * [kindnet-819000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 10:22:17.627087    4750 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 10:22:17.627131    4750 notify.go:220] Checking for updates...
	I0307 10:22:17.632221    4750 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:22:17.635090    4750 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 10:22:17.638137    4750 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:22:17.641079    4750 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 10:22:17.644050    4750 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:22:17.647452    4750 config.go:182] Loaded profile config "multinode-606000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:22:17.647524    4750 config.go:182] Loaded profile config "stopped-upgrade-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 10:22:17.647570    4750 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:22:17.652108    4750 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 10:22:17.659086    4750 start.go:297] selected driver: qemu2
	I0307 10:22:17.659092    4750 start.go:901] validating driver "qemu2" against <nil>
	I0307 10:22:17.659098    4750 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:22:17.661314    4750 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 10:22:17.664073    4750 out.go:177] * Automatically selected the socket_vmnet network
	I0307 10:22:17.667167    4750 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 10:22:17.667206    4750 cni.go:84] Creating CNI manager for "kindnet"
	I0307 10:22:17.667210    4750 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0307 10:22:17.667249    4750 start.go:340] cluster config:
	{Name:kindnet-819000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:22:17.671693    4750 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:22:17.678899    4750 out.go:177] * Starting "kindnet-819000" primary control-plane node in "kindnet-819000" cluster
	I0307 10:22:17.683106    4750 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 10:22:17.683120    4750 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 10:22:17.683132    4750 cache.go:56] Caching tarball of preloaded images
	I0307 10:22:17.683203    4750 preload.go:173] Found /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 10:22:17.683209    4750 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 10:22:17.683281    4750 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/kindnet-819000/config.json ...
	I0307 10:22:17.683293    4750 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/kindnet-819000/config.json: {Name:mkdcf020309d47d51325a03def2c69b35f5343cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:22:17.683521    4750 start.go:360] acquireMachinesLock for kindnet-819000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:22:17.683552    4750 start.go:364] duration metric: took 26.084µs to acquireMachinesLock for "kindnet-819000"
	I0307 10:22:17.683563    4750 start.go:93] Provisioning new machine with config: &{Name:kindnet-819000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:22:17.683593    4750 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:22:17.690053    4750 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 10:22:17.706063    4750 start.go:159] libmachine.API.Create for "kindnet-819000" (driver="qemu2")
	I0307 10:22:17.706089    4750 client.go:168] LocalClient.Create starting
	I0307 10:22:17.706143    4750 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:22:17.706173    4750 main.go:141] libmachine: Decoding PEM data...
	I0307 10:22:17.706182    4750 main.go:141] libmachine: Parsing certificate...
	I0307 10:22:17.706226    4750 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:22:17.706246    4750 main.go:141] libmachine: Decoding PEM data...
	I0307 10:22:17.706253    4750 main.go:141] libmachine: Parsing certificate...
	I0307 10:22:17.706644    4750 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:22:17.841850    4750 main.go:141] libmachine: Creating SSH key...
	I0307 10:22:17.890508    4750 main.go:141] libmachine: Creating Disk image...
	I0307 10:22:17.890514    4750 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:22:17.890681    4750 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kindnet-819000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kindnet-819000/disk.qcow2
	I0307 10:22:17.902859    4750 main.go:141] libmachine: STDOUT: 
	I0307 10:22:17.902878    4750 main.go:141] libmachine: STDERR: 
	I0307 10:22:17.902953    4750 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kindnet-819000/disk.qcow2 +20000M
	I0307 10:22:17.914590    4750 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:22:17.914619    4750 main.go:141] libmachine: STDERR: 
	I0307 10:22:17.914639    4750 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kindnet-819000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kindnet-819000/disk.qcow2
	I0307 10:22:17.914645    4750 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:22:17.914676    4750 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kindnet-819000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kindnet-819000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kindnet-819000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:17:03:74:d7:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kindnet-819000/disk.qcow2
	I0307 10:22:17.916662    4750 main.go:141] libmachine: STDOUT: 
	I0307 10:22:17.916680    4750 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:22:17.916698    4750 client.go:171] duration metric: took 210.611125ms to LocalClient.Create
	I0307 10:22:19.918986    4750 start.go:128] duration metric: took 2.235436208s to createHost
	I0307 10:22:19.919090    4750 start.go:83] releasing machines lock for "kindnet-819000", held for 2.235602209s
	W0307 10:22:19.919150    4750 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:22:19.934487    4750 out.go:177] * Deleting "kindnet-819000" in qemu2 ...
	W0307 10:22:19.958601    4750 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:22:19.958642    4750 start.go:728] Will try again in 5 seconds ...
	I0307 10:22:24.959663    4750 start.go:360] acquireMachinesLock for kindnet-819000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:22:24.960306    4750 start.go:364] duration metric: took 522.458µs to acquireMachinesLock for "kindnet-819000"
	I0307 10:22:24.960467    4750 start.go:93] Provisioning new machine with config: &{Name:kindnet-819000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:22:24.960754    4750 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:22:24.966464    4750 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 10:22:25.016842    4750 start.go:159] libmachine.API.Create for "kindnet-819000" (driver="qemu2")
	I0307 10:22:25.016894    4750 client.go:168] LocalClient.Create starting
	I0307 10:22:25.017014    4750 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:22:25.017081    4750 main.go:141] libmachine: Decoding PEM data...
	I0307 10:22:25.017099    4750 main.go:141] libmachine: Parsing certificate...
	I0307 10:22:25.017163    4750 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:22:25.017205    4750 main.go:141] libmachine: Decoding PEM data...
	I0307 10:22:25.017218    4750 main.go:141] libmachine: Parsing certificate...
	I0307 10:22:25.017767    4750 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:22:25.164859    4750 main.go:141] libmachine: Creating SSH key...
	I0307 10:22:25.313253    4750 main.go:141] libmachine: Creating Disk image...
	I0307 10:22:25.313260    4750 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:22:25.313468    4750 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kindnet-819000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kindnet-819000/disk.qcow2
	I0307 10:22:25.326246    4750 main.go:141] libmachine: STDOUT: 
	I0307 10:22:25.326270    4750 main.go:141] libmachine: STDERR: 
	I0307 10:22:25.326324    4750 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kindnet-819000/disk.qcow2 +20000M
	I0307 10:22:25.337249    4750 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:22:25.337266    4750 main.go:141] libmachine: STDERR: 
	I0307 10:22:25.337279    4750 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kindnet-819000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kindnet-819000/disk.qcow2
	I0307 10:22:25.337283    4750 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:22:25.337310    4750 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kindnet-819000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kindnet-819000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kindnet-819000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:d0:26:73:4e:25 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kindnet-819000/disk.qcow2
	I0307 10:22:25.339043    4750 main.go:141] libmachine: STDOUT: 
	I0307 10:22:25.339061    4750 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:22:25.339074    4750 client.go:171] duration metric: took 322.182875ms to LocalClient.Create
	I0307 10:22:27.341103    4750 start.go:128] duration metric: took 2.380398667s to createHost
	I0307 10:22:27.341148    4750 start.go:83] releasing machines lock for "kindnet-819000", held for 2.380885833s
	W0307 10:22:27.341287    4750 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-819000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-819000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:22:27.350610    4750 out.go:177] 
	W0307 10:22:27.354499    4750 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:22:27.354515    4750 out.go:239] * 
	* 
	W0307 10:22:27.355376    4750 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:22:27.366372    4750 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.81s)
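
Each failing start follows the same two-attempt shape visible in the stderr: create the host, fail, delete the profile, wait five seconds ("Will try again in 5 seconds ..."), retry once, then exit with GUEST_PROVISION. A minimal sketch of that retry loop (createHost and deleteHost are hypothetical stand-ins, not minikube internals):

package main

import (
	"errors"
	"fmt"
	"log"
	"time"
)

// startHostWithRetry mirrors the flow in the logs: tear down the
// half-created machine on failure, wait, try exactly once more, then
// surface the provisioning error.
func startHostWithRetry(createHost func() error, deleteHost func()) error {
	err := createHost()
	if err == nil {
		return nil
	}
	deleteHost() // "* Deleting ... in qemu2 ..."
	log.Printf("StartHost failed, but will try again: %v", err)
	time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
	if err = createHost(); err == nil {
		return nil
	}
	return fmt.Errorf("error provisioning guest: Failed to start host: %w", err)
}

func main() {
	// Simulate this run's environment: every create attempt is refused.
	err := startHostWithRetry(
		func() error { return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`) },
		func() { log.Println("deleting half-created VM") },
	)
	fmt.Println(err)
}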

TestNetworkPlugins/group/calico/Start (9.87s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-819000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-819000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.870210375s)

-- stdout --
	* [calico-819000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-819000" primary control-plane node in "calico-819000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-819000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 10:22:29.683315    4864 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:22:29.683570    4864 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:22:29.683583    4864 out.go:304] Setting ErrFile to fd 2...
	I0307 10:22:29.683590    4864 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:22:29.683943    4864 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:22:29.685251    4864 out.go:298] Setting JSON to false
	I0307 10:22:29.701777    4864 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4921,"bootTime":1709830828,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 10:22:29.701844    4864 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:22:29.707315    4864 out.go:177] * [calico-819000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 10:22:29.715278    4864 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 10:22:29.719083    4864 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:22:29.715310    4864 notify.go:220] Checking for updates...
	I0307 10:22:29.725263    4864 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 10:22:29.728311    4864 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:22:29.731296    4864 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 10:22:29.734275    4864 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:22:29.737626    4864 config.go:182] Loaded profile config "multinode-606000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:22:29.737696    4864 config.go:182] Loaded profile config "stopped-upgrade-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 10:22:29.737749    4864 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:22:29.742244    4864 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 10:22:29.749210    4864 start.go:297] selected driver: qemu2
	I0307 10:22:29.749216    4864 start.go:901] validating driver "qemu2" against <nil>
	I0307 10:22:29.749221    4864 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:22:29.751476    4864 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 10:22:29.754190    4864 out.go:177] * Automatically selected the socket_vmnet network
	I0307 10:22:29.757328    4864 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 10:22:29.757362    4864 cni.go:84] Creating CNI manager for "calico"
	I0307 10:22:29.757367    4864 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0307 10:22:29.757403    4864 start.go:340] cluster config:
	{Name:calico-819000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:22:29.762113    4864 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:22:29.767209    4864 out.go:177] * Starting "calico-819000" primary control-plane node in "calico-819000" cluster
	I0307 10:22:29.771227    4864 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 10:22:29.771241    4864 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 10:22:29.771252    4864 cache.go:56] Caching tarball of preloaded images
	I0307 10:22:29.771304    4864 preload.go:173] Found /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 10:22:29.771310    4864 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 10:22:29.771373    4864 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/calico-819000/config.json ...
	I0307 10:22:29.771388    4864 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/calico-819000/config.json: {Name:mk87d40e100adfe4e33fd2ba922650ba7cc54a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:22:29.771687    4864 start.go:360] acquireMachinesLock for calico-819000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:22:29.771720    4864 start.go:364] duration metric: took 27.5µs to acquireMachinesLock for "calico-819000"
	I0307 10:22:29.771731    4864 start.go:93] Provisioning new machine with config: &{Name:calico-819000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:22:29.771768    4864 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:22:29.775254    4864 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 10:22:29.792495    4864 start.go:159] libmachine.API.Create for "calico-819000" (driver="qemu2")
	I0307 10:22:29.792520    4864 client.go:168] LocalClient.Create starting
	I0307 10:22:29.792582    4864 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:22:29.792614    4864 main.go:141] libmachine: Decoding PEM data...
	I0307 10:22:29.792628    4864 main.go:141] libmachine: Parsing certificate...
	I0307 10:22:29.792671    4864 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:22:29.792697    4864 main.go:141] libmachine: Decoding PEM data...
	I0307 10:22:29.792711    4864 main.go:141] libmachine: Parsing certificate...
	I0307 10:22:29.793167    4864 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:22:29.930982    4864 main.go:141] libmachine: Creating SSH key...
	I0307 10:22:30.022687    4864 main.go:141] libmachine: Creating Disk image...
	I0307 10:22:30.022698    4864 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:22:30.022896    4864 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/calico-819000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/calico-819000/disk.qcow2
	I0307 10:22:30.037125    4864 main.go:141] libmachine: STDOUT: 
	I0307 10:22:30.037153    4864 main.go:141] libmachine: STDERR: 
	I0307 10:22:30.037233    4864 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/calico-819000/disk.qcow2 +20000M
	I0307 10:22:30.049058    4864 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:22:30.049087    4864 main.go:141] libmachine: STDERR: 
	I0307 10:22:30.049107    4864 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/calico-819000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/calico-819000/disk.qcow2
	I0307 10:22:30.049111    4864 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:22:30.049139    4864 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/calico-819000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/calico-819000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/calico-819000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:c1:29:b1:42:85 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/calico-819000/disk.qcow2
	I0307 10:22:30.051052    4864 main.go:141] libmachine: STDOUT: 
	I0307 10:22:30.051068    4864 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:22:30.051088    4864 client.go:171] duration metric: took 258.571916ms to LocalClient.Create
	I0307 10:22:32.052225    4864 start.go:128] duration metric: took 2.280511209s to createHost
	I0307 10:22:32.052267    4864 start.go:83] releasing machines lock for "calico-819000", held for 2.280615542s
	W0307 10:22:32.052310    4864 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:22:32.062802    4864 out.go:177] * Deleting "calico-819000" in qemu2 ...
	W0307 10:22:32.080548    4864 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:22:32.080569    4864 start.go:728] Will try again in 5 seconds ...
	I0307 10:22:37.082485    4864 start.go:360] acquireMachinesLock for calico-819000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:22:37.082666    4864 start.go:364] duration metric: took 151µs to acquireMachinesLock for "calico-819000"
	I0307 10:22:37.082741    4864 start.go:93] Provisioning new machine with config: &{Name:calico-819000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:22:37.082833    4864 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:22:37.086400    4864 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 10:22:37.107560    4864 start.go:159] libmachine.API.Create for "calico-819000" (driver="qemu2")
	I0307 10:22:37.107596    4864 client.go:168] LocalClient.Create starting
	I0307 10:22:37.107704    4864 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:22:37.107750    4864 main.go:141] libmachine: Decoding PEM data...
	I0307 10:22:37.107761    4864 main.go:141] libmachine: Parsing certificate...
	I0307 10:22:37.107799    4864 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:22:37.107828    4864 main.go:141] libmachine: Decoding PEM data...
	I0307 10:22:37.107836    4864 main.go:141] libmachine: Parsing certificate...
	I0307 10:22:37.108162    4864 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:22:37.247997    4864 main.go:141] libmachine: Creating SSH key...
	I0307 10:22:37.455283    4864 main.go:141] libmachine: Creating Disk image...
	I0307 10:22:37.455295    4864 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:22:37.455472    4864 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/calico-819000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/calico-819000/disk.qcow2
	I0307 10:22:37.468316    4864 main.go:141] libmachine: STDOUT: 
	I0307 10:22:37.468340    4864 main.go:141] libmachine: STDERR: 
	I0307 10:22:37.468407    4864 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/calico-819000/disk.qcow2 +20000M
	I0307 10:22:37.479687    4864 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:22:37.479712    4864 main.go:141] libmachine: STDERR: 
	I0307 10:22:37.479733    4864 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/calico-819000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/calico-819000/disk.qcow2
	I0307 10:22:37.479740    4864 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:22:37.479773    4864 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/calico-819000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/calico-819000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/calico-819000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:62:c3:be:9b:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/calico-819000/disk.qcow2
	I0307 10:22:37.481684    4864 main.go:141] libmachine: STDOUT: 
	I0307 10:22:37.481701    4864 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:22:37.481714    4864 client.go:171] duration metric: took 374.125959ms to LocalClient.Create
	I0307 10:22:39.483960    4864 start.go:128] duration metric: took 2.401152125s to createHost
	I0307 10:22:39.484041    4864 start.go:83] releasing machines lock for "calico-819000", held for 2.401441417s
	W0307 10:22:39.484340    4864 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-819000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-819000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:22:39.496883    4864 out.go:177] 
	W0307 10:22:39.500946    4864 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:22:39.500984    4864 out.go:239] * 
	* 
	W0307 10:22:39.502993    4864 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:22:39.510878    4864 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.87s)
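
Note: each qemu2 "Start" failure in this group is the same host-side failure, not a CNI failure. The qemu2 driver launches the VM through socket_vmnet_client, which must reach a socket_vmnet daemon listening on /var/run/socket_vmnet; "Connection refused" means no daemon was serving that socket, so the run never got as far as installing Calico. A minimal diagnostic sketch in shell, assuming the daemon lives under the /opt/socket_vmnet prefix shown in the cluster config above (the --vmnet-gateway flag and its address are illustrative assumptions from the upstream socket_vmnet documentation, not values taken from this run):

	# Check whether anything is serving the socket minikube expects.
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# If nothing is listening, (re)start the daemon as root
	# (vmnet.framework generally requires root privileges).
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet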

TestNetworkPlugins/group/custom-flannel/Start (9.82s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-819000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-819000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.821446166s)

-- stdout --
	* [custom-flannel-819000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-819000" primary control-plane node in "custom-flannel-819000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-819000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 10:22:42.038506    4985 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:22:42.038656    4985 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:22:42.038659    4985 out.go:304] Setting ErrFile to fd 2...
	I0307 10:22:42.038662    4985 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:22:42.038803    4985 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:22:42.039944    4985 out.go:298] Setting JSON to false
	I0307 10:22:42.056783    4985 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4934,"bootTime":1709830828,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 10:22:42.056870    4985 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:22:42.063604    4985 out.go:177] * [custom-flannel-819000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 10:22:42.071558    4985 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 10:22:42.071620    4985 notify.go:220] Checking for updates...
	I0307 10:22:42.076593    4985 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:22:42.079478    4985 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 10:22:42.082547    4985 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:22:42.085568    4985 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 10:22:42.086951    4985 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:22:42.090884    4985 config.go:182] Loaded profile config "multinode-606000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:22:42.090968    4985 config.go:182] Loaded profile config "stopped-upgrade-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 10:22:42.091016    4985 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:22:42.097608    4985 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 10:22:42.105579    4985 start.go:297] selected driver: qemu2
	I0307 10:22:42.105585    4985 start.go:901] validating driver "qemu2" against <nil>
	I0307 10:22:42.105591    4985 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:22:42.108100    4985 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 10:22:42.109397    4985 out.go:177] * Automatically selected the socket_vmnet network
	I0307 10:22:42.112624    4985 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 10:22:42.112672    4985 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0307 10:22:42.112931    4985 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0307 10:22:42.112972    4985 start.go:340] cluster config:
	{Name:custom-flannel-819000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:22:42.117974    4985 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:22:42.121591    4985 out.go:177] * Starting "custom-flannel-819000" primary control-plane node in "custom-flannel-819000" cluster
	I0307 10:22:42.129530    4985 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 10:22:42.129581    4985 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 10:22:42.129597    4985 cache.go:56] Caching tarball of preloaded images
	I0307 10:22:42.129682    4985 preload.go:173] Found /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 10:22:42.129688    4985 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 10:22:42.129769    4985 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/custom-flannel-819000/config.json ...
	I0307 10:22:42.129782    4985 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/custom-flannel-819000/config.json: {Name:mk152e8c70b3ff825fbb43d3aa0d6f0e5cad6a14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:22:42.130019    4985 start.go:360] acquireMachinesLock for custom-flannel-819000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:22:42.130054    4985 start.go:364] duration metric: took 25.5µs to acquireMachinesLock for "custom-flannel-819000"
	I0307 10:22:42.130065    4985 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-819000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:22:42.130098    4985 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:22:42.134597    4985 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 10:22:42.150084    4985 start.go:159] libmachine.API.Create for "custom-flannel-819000" (driver="qemu2")
	I0307 10:22:42.150107    4985 client.go:168] LocalClient.Create starting
	I0307 10:22:42.150169    4985 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:22:42.150199    4985 main.go:141] libmachine: Decoding PEM data...
	I0307 10:22:42.150212    4985 main.go:141] libmachine: Parsing certificate...
	I0307 10:22:42.150256    4985 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:22:42.150278    4985 main.go:141] libmachine: Decoding PEM data...
	I0307 10:22:42.150282    4985 main.go:141] libmachine: Parsing certificate...
	I0307 10:22:42.150685    4985 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:22:42.296023    4985 main.go:141] libmachine: Creating SSH key...
	I0307 10:22:42.461052    4985 main.go:141] libmachine: Creating Disk image...
	I0307 10:22:42.461066    4985 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:22:42.461274    4985 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/custom-flannel-819000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/custom-flannel-819000/disk.qcow2
	I0307 10:22:42.474280    4985 main.go:141] libmachine: STDOUT: 
	I0307 10:22:42.474321    4985 main.go:141] libmachine: STDERR: 
	I0307 10:22:42.474391    4985 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/custom-flannel-819000/disk.qcow2 +20000M
	I0307 10:22:42.486059    4985 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:22:42.486079    4985 main.go:141] libmachine: STDERR: 
	I0307 10:22:42.486109    4985 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/custom-flannel-819000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/custom-flannel-819000/disk.qcow2
	I0307 10:22:42.486113    4985 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:22:42.486162    4985 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/custom-flannel-819000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/custom-flannel-819000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/custom-flannel-819000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:5b:29:fe:2b:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/custom-flannel-819000/disk.qcow2
	I0307 10:22:42.488223    4985 main.go:141] libmachine: STDOUT: 
	I0307 10:22:42.488258    4985 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:22:42.488287    4985 client.go:171] duration metric: took 338.1865ms to LocalClient.Create
	I0307 10:22:44.490446    4985 start.go:128] duration metric: took 2.360403875s to createHost
	I0307 10:22:44.490548    4985 start.go:83] releasing machines lock for "custom-flannel-819000", held for 2.360563458s
	W0307 10:22:44.490601    4985 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:22:44.504806    4985 out.go:177] * Deleting "custom-flannel-819000" in qemu2 ...
	W0307 10:22:44.523377    4985 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:22:44.523397    4985 start.go:728] Will try again in 5 seconds ...
	I0307 10:22:49.525416    4985 start.go:360] acquireMachinesLock for custom-flannel-819000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:22:49.525968    4985 start.go:364] duration metric: took 369.75µs to acquireMachinesLock for "custom-flannel-819000"
	I0307 10:22:49.526159    4985 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-819000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:22:49.526404    4985 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:22:49.536013    4985 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 10:22:49.582052    4985 start.go:159] libmachine.API.Create for "custom-flannel-819000" (driver="qemu2")
	I0307 10:22:49.582111    4985 client.go:168] LocalClient.Create starting
	I0307 10:22:49.582237    4985 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:22:49.582301    4985 main.go:141] libmachine: Decoding PEM data...
	I0307 10:22:49.582318    4985 main.go:141] libmachine: Parsing certificate...
	I0307 10:22:49.582381    4985 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:22:49.582423    4985 main.go:141] libmachine: Decoding PEM data...
	I0307 10:22:49.582433    4985 main.go:141] libmachine: Parsing certificate...
	I0307 10:22:49.582930    4985 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:22:49.729357    4985 main.go:141] libmachine: Creating SSH key...
	I0307 10:22:49.764431    4985 main.go:141] libmachine: Creating Disk image...
	I0307 10:22:49.764436    4985 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:22:49.764619    4985 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/custom-flannel-819000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/custom-flannel-819000/disk.qcow2
	I0307 10:22:49.777088    4985 main.go:141] libmachine: STDOUT: 
	I0307 10:22:49.777112    4985 main.go:141] libmachine: STDERR: 
	I0307 10:22:49.777174    4985 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/custom-flannel-819000/disk.qcow2 +20000M
	I0307 10:22:49.788018    4985 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:22:49.788034    4985 main.go:141] libmachine: STDERR: 
	I0307 10:22:49.788051    4985 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/custom-flannel-819000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/custom-flannel-819000/disk.qcow2
	I0307 10:22:49.788056    4985 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:22:49.788085    4985 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/custom-flannel-819000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/custom-flannel-819000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/custom-flannel-819000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:8b:67:c1:ee:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/custom-flannel-819000/disk.qcow2
	I0307 10:22:49.789811    4985 main.go:141] libmachine: STDOUT: 
	I0307 10:22:49.789827    4985 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:22:49.789844    4985 client.go:171] duration metric: took 207.7335ms to LocalClient.Create
	I0307 10:22:51.792074    4985 start.go:128] duration metric: took 2.265692291s to createHost
	I0307 10:22:51.792162    4985 start.go:83] releasing machines lock for "custom-flannel-819000", held for 2.266209792s
	W0307 10:22:51.792486    4985 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-819000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-819000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:22:51.803055    4985 out.go:177] 
	W0307 10:22:51.806118    4985 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:22:51.806161    4985 out.go:239] * 
	* 
	W0307 10:22:51.808515    4985 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:22:51.816082    4985 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.82s)
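
Note: the failing handshake can also be reproduced without minikube. As the QEMU command lines above show, socket_vmnet_client dials the socket and hands the connection to the wrapped command as fd 3 (hence -netdev socket,id=net0,fd=3). A quick connectivity probe, using /usr/bin/true as an illustrative stand-in for the real qemu-system-aarch64 invocation:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true
	# On this host this would fail with the same 'Failed to connect to
	# "/var/run/socket_vmnet": Connection refused' and a non-zero exit status.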

TestNetworkPlugins/group/false/Start (9.89s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-819000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-819000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.889354708s)

-- stdout --
	* [false-819000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-819000" primary control-plane node in "false-819000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-819000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 10:22:54.310264    5105 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:22:54.310430    5105 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:22:54.310433    5105 out.go:304] Setting ErrFile to fd 2...
	I0307 10:22:54.310435    5105 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:22:54.310553    5105 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:22:54.311703    5105 out.go:298] Setting JSON to false
	I0307 10:22:54.328109    5105 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4946,"bootTime":1709830828,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 10:22:54.328164    5105 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:22:54.333250    5105 out.go:177] * [false-819000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 10:22:54.341064    5105 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 10:22:54.341101    5105 notify.go:220] Checking for updates...
	I0307 10:22:54.348112    5105 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:22:54.349551    5105 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 10:22:54.352148    5105 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:22:54.355177    5105 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 10:22:54.358228    5105 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:22:54.361491    5105 config.go:182] Loaded profile config "multinode-606000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:22:54.361563    5105 config.go:182] Loaded profile config "stopped-upgrade-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 10:22:54.361620    5105 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:22:54.366135    5105 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 10:22:54.373158    5105 start.go:297] selected driver: qemu2
	I0307 10:22:54.373163    5105 start.go:901] validating driver "qemu2" against <nil>
	I0307 10:22:54.373168    5105 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:22:54.375476    5105 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 10:22:54.379154    5105 out.go:177] * Automatically selected the socket_vmnet network
	I0307 10:22:54.382302    5105 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 10:22:54.382361    5105 cni.go:84] Creating CNI manager for "false"
	I0307 10:22:54.382382    5105 start.go:340] cluster config:
	{Name:false-819000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:false-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:22:54.386864    5105 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:22:54.394148    5105 out.go:177] * Starting "false-819000" primary control-plane node in "false-819000" cluster
	I0307 10:22:54.397134    5105 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 10:22:54.397148    5105 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 10:22:54.397159    5105 cache.go:56] Caching tarball of preloaded images
	I0307 10:22:54.397212    5105 preload.go:173] Found /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 10:22:54.397217    5105 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 10:22:54.397272    5105 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/false-819000/config.json ...
	I0307 10:22:54.397283    5105 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/false-819000/config.json: {Name:mkfb544d11161e934afbbb1468014b0e5df1dfa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:22:54.397499    5105 start.go:360] acquireMachinesLock for false-819000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:22:54.397548    5105 start.go:364] duration metric: took 42.583µs to acquireMachinesLock for "false-819000"
	I0307 10:22:54.397559    5105 start.go:93] Provisioning new machine with config: &{Name:false-819000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:false-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:22:54.397595    5105 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:22:54.404998    5105 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 10:22:54.422355    5105 start.go:159] libmachine.API.Create for "false-819000" (driver="qemu2")
	I0307 10:22:54.422390    5105 client.go:168] LocalClient.Create starting
	I0307 10:22:54.422478    5105 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:22:54.422508    5105 main.go:141] libmachine: Decoding PEM data...
	I0307 10:22:54.422519    5105 main.go:141] libmachine: Parsing certificate...
	I0307 10:22:54.422568    5105 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:22:54.422591    5105 main.go:141] libmachine: Decoding PEM data...
	I0307 10:22:54.422603    5105 main.go:141] libmachine: Parsing certificate...
	I0307 10:22:54.422962    5105 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:22:54.560463    5105 main.go:141] libmachine: Creating SSH key...
	I0307 10:22:54.669059    5105 main.go:141] libmachine: Creating Disk image...
	I0307 10:22:54.669067    5105 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:22:54.669236    5105 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/false-819000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/false-819000/disk.qcow2
	I0307 10:22:54.682045    5105 main.go:141] libmachine: STDOUT: 
	I0307 10:22:54.682065    5105 main.go:141] libmachine: STDERR: 
	I0307 10:22:54.682132    5105 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/false-819000/disk.qcow2 +20000M
	I0307 10:22:54.693121    5105 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:22:54.693137    5105 main.go:141] libmachine: STDERR: 
	I0307 10:22:54.693154    5105 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/false-819000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/false-819000/disk.qcow2
	I0307 10:22:54.693160    5105 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:22:54.693207    5105 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/false-819000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/false-819000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/false-819000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:cf:e8:89:6a:82 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/false-819000/disk.qcow2
	I0307 10:22:54.694840    5105 main.go:141] libmachine: STDOUT: 
	I0307 10:22:54.694853    5105 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:22:54.694875    5105 client.go:171] duration metric: took 272.489416ms to LocalClient.Create
	I0307 10:22:56.697068    5105 start.go:128] duration metric: took 2.299511667s to createHost
	I0307 10:22:56.697169    5105 start.go:83] releasing machines lock for "false-819000", held for 2.299689167s
	W0307 10:22:56.697213    5105 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:22:56.702967    5105 out.go:177] * Deleting "false-819000" in qemu2 ...
	W0307 10:22:56.724679    5105 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:22:56.724704    5105 start.go:728] Will try again in 5 seconds ...
	I0307 10:23:01.726929    5105 start.go:360] acquireMachinesLock for false-819000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:23:01.727526    5105 start.go:364] duration metric: took 475.667µs to acquireMachinesLock for "false-819000"
	I0307 10:23:01.727610    5105 start.go:93] Provisioning new machine with config: &{Name:false-819000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:false-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:23:01.727904    5105 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:23:01.737512    5105 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 10:23:01.784353    5105 start.go:159] libmachine.API.Create for "false-819000" (driver="qemu2")
	I0307 10:23:01.784410    5105 client.go:168] LocalClient.Create starting
	I0307 10:23:01.784552    5105 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:23:01.784617    5105 main.go:141] libmachine: Decoding PEM data...
	I0307 10:23:01.784633    5105 main.go:141] libmachine: Parsing certificate...
	I0307 10:23:01.784705    5105 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:23:01.784748    5105 main.go:141] libmachine: Decoding PEM data...
	I0307 10:23:01.784764    5105 main.go:141] libmachine: Parsing certificate...
	I0307 10:23:01.785316    5105 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:23:01.933691    5105 main.go:141] libmachine: Creating SSH key...
	I0307 10:23:02.102725    5105 main.go:141] libmachine: Creating Disk image...
	I0307 10:23:02.102733    5105 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:23:02.102914    5105 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/false-819000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/false-819000/disk.qcow2
	I0307 10:23:02.115280    5105 main.go:141] libmachine: STDOUT: 
	I0307 10:23:02.115301    5105 main.go:141] libmachine: STDERR: 
	I0307 10:23:02.115360    5105 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/false-819000/disk.qcow2 +20000M
	I0307 10:23:02.126072    5105 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:23:02.126093    5105 main.go:141] libmachine: STDERR: 
	I0307 10:23:02.126105    5105 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/false-819000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/false-819000/disk.qcow2
	I0307 10:23:02.126119    5105 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:23:02.126171    5105 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/false-819000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/false-819000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/false-819000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:62:1d:46:e7:58 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/false-819000/disk.qcow2
	I0307 10:23:02.128009    5105 main.go:141] libmachine: STDOUT: 
	I0307 10:23:02.128034    5105 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:23:02.128048    5105 client.go:171] duration metric: took 343.643708ms to LocalClient.Create
	I0307 10:23:04.130166    5105 start.go:128] duration metric: took 2.402304667s to createHost
	I0307 10:23:04.130257    5105 start.go:83] releasing machines lock for "false-819000", held for 2.402784708s
	W0307 10:23:04.130610    5105 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-819000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-819000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:23:04.141069    5105 out.go:177] 
	W0307 10:23:04.144237    5105 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:23:04.144278    5105 out.go:239] * 
	* 
	W0307 10:23:04.146137    5105 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:23:04.157298    5105 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.89s)
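
Every start attempt above dies at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU is never launched and minikube exits with GUEST_PROVISION after one retry. As a quick pre-flight check on the CI host, a minimal standalone Go probe (a hypothetical diagnostic, not part of minikube; the socket path is taken from the log above) distinguishes "daemon not listening" from other start failures:

	// probe.go: dial the unix socket that socket_vmnet_client needs.
	// A "connection refused" error here reproduces the failure mode
	// seen in every attempt above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path from the log above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "cannot connect to %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails with the same "connection refused", the likely fix is restarting the socket_vmnet daemon on the host (however it is supervised there) rather than anything in the tests themselves.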

TestNetworkPlugins/group/enable-default-cni/Start (9.78s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-819000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-819000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.7738305s)

-- stdout --
	* [enable-default-cni-819000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-819000" primary control-plane node in "enable-default-cni-819000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-819000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 10:23:06.472867    5216 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:23:06.473016    5216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:23:06.473020    5216 out.go:304] Setting ErrFile to fd 2...
	I0307 10:23:06.473022    5216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:23:06.473143    5216 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:23:06.474355    5216 out.go:298] Setting JSON to false
	I0307 10:23:06.492451    5216 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4958,"bootTime":1709830828,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 10:23:06.492552    5216 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:23:06.497579    5216 out.go:177] * [enable-default-cni-819000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 10:23:06.505652    5216 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 10:23:06.510635    5216 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:23:06.505674    5216 notify.go:220] Checking for updates...
	I0307 10:23:06.516600    5216 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 10:23:06.519539    5216 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:23:06.522638    5216 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 10:23:06.525662    5216 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:23:06.529026    5216 config.go:182] Loaded profile config "multinode-606000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:23:06.529093    5216 config.go:182] Loaded profile config "stopped-upgrade-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 10:23:06.529143    5216 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:23:06.533573    5216 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 10:23:06.540556    5216 start.go:297] selected driver: qemu2
	I0307 10:23:06.540567    5216 start.go:901] validating driver "qemu2" against <nil>
	I0307 10:23:06.540574    5216 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:23:06.543199    5216 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 10:23:06.546588    5216 out.go:177] * Automatically selected the socket_vmnet network
	E0307 10:23:06.549670    5216 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0307 10:23:06.549689    5216 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 10:23:06.549741    5216 cni.go:84] Creating CNI manager for "bridge"
	I0307 10:23:06.549750    5216 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 10:23:06.549787    5216 start.go:340] cluster config:
	{Name:enable-default-cni-819000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:23:06.554667    5216 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:23:06.561565    5216 out.go:177] * Starting "enable-default-cni-819000" primary control-plane node in "enable-default-cni-819000" cluster
	I0307 10:23:06.565639    5216 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 10:23:06.565671    5216 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 10:23:06.565685    5216 cache.go:56] Caching tarball of preloaded images
	I0307 10:23:06.565763    5216 preload.go:173] Found /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 10:23:06.565770    5216 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 10:23:06.565841    5216 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/enable-default-cni-819000/config.json ...
	I0307 10:23:06.565853    5216 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/enable-default-cni-819000/config.json: {Name:mkf2d82a0e10f2473e940b8fa71cf52b19a600a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:23:06.566168    5216 start.go:360] acquireMachinesLock for enable-default-cni-819000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:23:06.566200    5216 start.go:364] duration metric: took 23.875µs to acquireMachinesLock for "enable-default-cni-819000"
	I0307 10:23:06.566211    5216 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-819000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:23:06.566245    5216 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:23:06.570616    5216 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 10:23:06.585986    5216 start.go:159] libmachine.API.Create for "enable-default-cni-819000" (driver="qemu2")
	I0307 10:23:06.586007    5216 client.go:168] LocalClient.Create starting
	I0307 10:23:06.586067    5216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:23:06.586099    5216 main.go:141] libmachine: Decoding PEM data...
	I0307 10:23:06.586108    5216 main.go:141] libmachine: Parsing certificate...
	I0307 10:23:06.586151    5216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:23:06.586173    5216 main.go:141] libmachine: Decoding PEM data...
	I0307 10:23:06.586178    5216 main.go:141] libmachine: Parsing certificate...
	I0307 10:23:06.586536    5216 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:23:06.724144    5216 main.go:141] libmachine: Creating SSH key...
	I0307 10:23:06.774824    5216 main.go:141] libmachine: Creating Disk image...
	I0307 10:23:06.774829    5216 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:23:06.774983    5216 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/enable-default-cni-819000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/enable-default-cni-819000/disk.qcow2
	I0307 10:23:06.787024    5216 main.go:141] libmachine: STDOUT: 
	I0307 10:23:06.787044    5216 main.go:141] libmachine: STDERR: 
	I0307 10:23:06.787098    5216 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/enable-default-cni-819000/disk.qcow2 +20000M
	I0307 10:23:06.798249    5216 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:23:06.798271    5216 main.go:141] libmachine: STDERR: 
	I0307 10:23:06.798285    5216 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/enable-default-cni-819000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/enable-default-cni-819000/disk.qcow2
	I0307 10:23:06.798289    5216 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:23:06.798337    5216 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/enable-default-cni-819000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/enable-default-cni-819000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/enable-default-cni-819000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:7e:ae:c9:69:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/enable-default-cni-819000/disk.qcow2
	I0307 10:23:06.800171    5216 main.go:141] libmachine: STDOUT: 
	I0307 10:23:06.800187    5216 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:23:06.800205    5216 client.go:171] duration metric: took 214.200292ms to LocalClient.Create
	I0307 10:23:08.802345    5216 start.go:128] duration metric: took 2.236156959s to createHost
	I0307 10:23:08.802419    5216 start.go:83] releasing machines lock for "enable-default-cni-819000", held for 2.236285583s
	W0307 10:23:08.802449    5216 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:23:08.812682    5216 out.go:177] * Deleting "enable-default-cni-819000" in qemu2 ...
	W0307 10:23:08.831689    5216 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:23:08.831726    5216 start.go:728] Will try again in 5 seconds ...
	I0307 10:23:13.831927    5216 start.go:360] acquireMachinesLock for enable-default-cni-819000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:23:13.832241    5216 start.go:364] duration metric: took 262.542µs to acquireMachinesLock for "enable-default-cni-819000"
	I0307 10:23:13.832320    5216 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-819000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:23:13.832472    5216 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:23:13.839811    5216 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 10:23:13.875241    5216 start.go:159] libmachine.API.Create for "enable-default-cni-819000" (driver="qemu2")
	I0307 10:23:13.875286    5216 client.go:168] LocalClient.Create starting
	I0307 10:23:13.875387    5216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:23:13.875445    5216 main.go:141] libmachine: Decoding PEM data...
	I0307 10:23:13.875462    5216 main.go:141] libmachine: Parsing certificate...
	I0307 10:23:13.875515    5216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:23:13.875552    5216 main.go:141] libmachine: Decoding PEM data...
	I0307 10:23:13.875565    5216 main.go:141] libmachine: Parsing certificate...
	I0307 10:23:13.876029    5216 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:23:14.019955    5216 main.go:141] libmachine: Creating SSH key...
	I0307 10:23:14.140098    5216 main.go:141] libmachine: Creating Disk image...
	I0307 10:23:14.140108    5216 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:23:14.140310    5216 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/enable-default-cni-819000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/enable-default-cni-819000/disk.qcow2
	I0307 10:23:14.154046    5216 main.go:141] libmachine: STDOUT: 
	I0307 10:23:14.154068    5216 main.go:141] libmachine: STDERR: 
	I0307 10:23:14.154136    5216 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/enable-default-cni-819000/disk.qcow2 +20000M
	I0307 10:23:14.166808    5216 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:23:14.166827    5216 main.go:141] libmachine: STDERR: 
	I0307 10:23:14.166841    5216 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/enable-default-cni-819000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/enable-default-cni-819000/disk.qcow2
	I0307 10:23:14.166845    5216 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:23:14.166877    5216 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/enable-default-cni-819000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/enable-default-cni-819000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/enable-default-cni-819000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:5b:72:ba:de:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/enable-default-cni-819000/disk.qcow2
	I0307 10:23:14.169065    5216 main.go:141] libmachine: STDOUT: 
	I0307 10:23:14.169086    5216 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:23:14.169099    5216 client.go:171] duration metric: took 293.818625ms to LocalClient.Create
	I0307 10:23:16.171275    5216 start.go:128] duration metric: took 2.338840875s to createHost
	I0307 10:23:16.171387    5216 start.go:83] releasing machines lock for "enable-default-cni-819000", held for 2.339203s
	W0307 10:23:16.171733    5216 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-819000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-819000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:23:16.181333    5216 out.go:177] 
	W0307 10:23:16.188287    5216 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:23:16.188388    5216 out.go:239] * 
	* 
	W0307 10:23:16.191141    5216 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:23:16.199297    5216 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.78s)
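
One detail specific to this test: the E0307 line from start_flags.go:464 shows minikube rewriting the deprecated --enable-default-cni flag to --cni=bridge before the cluster config is built, which is why the config dump records NetworkPlugin:cni and CNI:bridge even though the test passed --enable-default-cni=true. A sketch of that translation, with illustrative names rather than minikube's actual API:

	package main

	import "fmt"

	// normalizeCNI mirrors the mapping the E0307 line reports: the
	// legacy boolean flag collapses into the string-valued --cni
	// flag, defaulting to the bridge plugin.
	func normalizeCNI(enableDefaultCNI bool, cni string) string {
		if enableDefaultCNI && cni == "" {
			return "bridge" // "setting --cni=bridge" in the log
		}
		return cni
	}

	func main() {
		fmt.Println(normalizeCNI(true, "")) // prints: bridge
	}

The provisioning failure itself is then the same socket_vmnet connection refusal as in the other network-plugin tests.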

TestNetworkPlugins/group/flannel/Start (9.77s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-819000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-819000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.765495667s)

-- stdout --
	* [flannel-819000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-819000" primary control-plane node in "flannel-819000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-819000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 10:23:18.500780    5329 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:23:18.500918    5329 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:23:18.500923    5329 out.go:304] Setting ErrFile to fd 2...
	I0307 10:23:18.500925    5329 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:23:18.501093    5329 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:23:18.502261    5329 out.go:298] Setting JSON to false
	I0307 10:23:18.519029    5329 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4970,"bootTime":1709830828,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 10:23:18.519099    5329 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:23:18.524074    5329 out.go:177] * [flannel-819000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 10:23:18.531910    5329 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 10:23:18.533642    5329 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:23:18.531985    5329 notify.go:220] Checking for updates...
	I0307 10:23:18.539939    5329 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 10:23:18.542968    5329 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:23:18.545939    5329 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 10:23:18.548958    5329 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:23:18.552318    5329 config.go:182] Loaded profile config "multinode-606000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:23:18.552379    5329 config.go:182] Loaded profile config "stopped-upgrade-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 10:23:18.552426    5329 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:23:18.556955    5329 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 10:23:18.563930    5329 start.go:297] selected driver: qemu2
	I0307 10:23:18.563935    5329 start.go:901] validating driver "qemu2" against <nil>
	I0307 10:23:18.563940    5329 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:23:18.566194    5329 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 10:23:18.569923    5329 out.go:177] * Automatically selected the socket_vmnet network
	I0307 10:23:18.573023    5329 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 10:23:18.573069    5329 cni.go:84] Creating CNI manager for "flannel"
	I0307 10:23:18.573073    5329 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0307 10:23:18.573105    5329 start.go:340] cluster config:
	{Name:flannel-819000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:23:18.577563    5329 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:23:18.582887    5329 out.go:177] * Starting "flannel-819000" primary control-plane node in "flannel-819000" cluster
	I0307 10:23:18.586906    5329 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 10:23:18.586919    5329 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 10:23:18.586929    5329 cache.go:56] Caching tarball of preloaded images
	I0307 10:23:18.587022    5329 preload.go:173] Found /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 10:23:18.587028    5329 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 10:23:18.587097    5329 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/flannel-819000/config.json ...
	I0307 10:23:18.587107    5329 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/flannel-819000/config.json: {Name:mk66d9cfddec9e3fef96226f07e74bea566c74eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:23:18.587367    5329 start.go:360] acquireMachinesLock for flannel-819000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:23:18.587394    5329 start.go:364] duration metric: took 22.417µs to acquireMachinesLock for "flannel-819000"
	I0307 10:23:18.587404    5329 start.go:93] Provisioning new machine with config: &{Name:flannel-819000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:23:18.587431    5329 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:23:18.590895    5329 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 10:23:18.605271    5329 start.go:159] libmachine.API.Create for "flannel-819000" (driver="qemu2")
	I0307 10:23:18.605302    5329 client.go:168] LocalClient.Create starting
	I0307 10:23:18.605360    5329 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:23:18.605390    5329 main.go:141] libmachine: Decoding PEM data...
	I0307 10:23:18.605404    5329 main.go:141] libmachine: Parsing certificate...
	I0307 10:23:18.605447    5329 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:23:18.605468    5329 main.go:141] libmachine: Decoding PEM data...
	I0307 10:23:18.605478    5329 main.go:141] libmachine: Parsing certificate...
	I0307 10:23:18.605819    5329 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:23:18.744483    5329 main.go:141] libmachine: Creating SSH key...
	I0307 10:23:18.786732    5329 main.go:141] libmachine: Creating Disk image...
	I0307 10:23:18.786737    5329 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:23:18.786905    5329 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/flannel-819000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/flannel-819000/disk.qcow2
	I0307 10:23:18.799011    5329 main.go:141] libmachine: STDOUT: 
	I0307 10:23:18.799032    5329 main.go:141] libmachine: STDERR: 
	I0307 10:23:18.799082    5329 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/flannel-819000/disk.qcow2 +20000M
	I0307 10:23:18.809984    5329 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:23:18.810010    5329 main.go:141] libmachine: STDERR: 
	I0307 10:23:18.810027    5329 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/flannel-819000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/flannel-819000/disk.qcow2
	I0307 10:23:18.810032    5329 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:23:18.810061    5329 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/flannel-819000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/flannel-819000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/flannel-819000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:50:ba:c7:d6:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/flannel-819000/disk.qcow2
	I0307 10:23:18.811846    5329 main.go:141] libmachine: STDOUT: 
	I0307 10:23:18.811862    5329 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:23:18.811885    5329 client.go:171] duration metric: took 206.584375ms to LocalClient.Create
	I0307 10:23:20.813664    5329 start.go:128] duration metric: took 2.226296459s to createHost
	I0307 10:23:20.813722    5329 start.go:83] releasing machines lock for "flannel-819000", held for 2.22639575s
	W0307 10:23:20.813745    5329 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:23:20.825662    5329 out.go:177] * Deleting "flannel-819000" in qemu2 ...
	W0307 10:23:20.841294    5329 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:23:20.841314    5329 start.go:728] Will try again in 5 seconds ...
	I0307 10:23:25.843396    5329 start.go:360] acquireMachinesLock for flannel-819000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:23:25.843861    5329 start.go:364] duration metric: took 364.5µs to acquireMachinesLock for "flannel-819000"
	I0307 10:23:25.843947    5329 start.go:93] Provisioning new machine with config: &{Name:flannel-819000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:23:25.844223    5329 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:23:25.851865    5329 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 10:23:25.889580    5329 start.go:159] libmachine.API.Create for "flannel-819000" (driver="qemu2")
	I0307 10:23:25.889621    5329 client.go:168] LocalClient.Create starting
	I0307 10:23:25.889724    5329 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:23:25.889781    5329 main.go:141] libmachine: Decoding PEM data...
	I0307 10:23:25.889802    5329 main.go:141] libmachine: Parsing certificate...
	I0307 10:23:25.889863    5329 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:23:25.889900    5329 main.go:141] libmachine: Decoding PEM data...
	I0307 10:23:25.889911    5329 main.go:141] libmachine: Parsing certificate...
	I0307 10:23:25.890366    5329 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:23:26.046136    5329 main.go:141] libmachine: Creating SSH key...
	I0307 10:23:26.162302    5329 main.go:141] libmachine: Creating Disk image...
	I0307 10:23:26.162310    5329 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:23:26.162503    5329 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/flannel-819000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/flannel-819000/disk.qcow2
	I0307 10:23:26.174817    5329 main.go:141] libmachine: STDOUT: 
	I0307 10:23:26.174838    5329 main.go:141] libmachine: STDERR: 
	I0307 10:23:26.174890    5329 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/flannel-819000/disk.qcow2 +20000M
	I0307 10:23:26.185587    5329 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:23:26.185615    5329 main.go:141] libmachine: STDERR: 
	I0307 10:23:26.185631    5329 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/flannel-819000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/flannel-819000/disk.qcow2
	I0307 10:23:26.185636    5329 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:23:26.185671    5329 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/flannel-819000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/flannel-819000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/flannel-819000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:5a:93:95:45:73 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/flannel-819000/disk.qcow2
	I0307 10:23:26.187452    5329 main.go:141] libmachine: STDOUT: 
	I0307 10:23:26.187471    5329 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:23:26.187487    5329 client.go:171] duration metric: took 297.871208ms to LocalClient.Create
	I0307 10:23:28.189607    5329 start.go:128] duration metric: took 2.34543275s to createHost
	I0307 10:23:28.189682    5329 start.go:83] releasing machines lock for "flannel-819000", held for 2.345853583s
	W0307 10:23:28.190013    5329 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-819000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-819000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:23:28.199762    5329 out.go:177] 
	W0307 10:23:28.205935    5329 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:23:28.205995    5329 out.go:239] * 
	* 
	W0307 10:23:28.208562    5329 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:23:28.215497    5329 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.77s)
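
The failure mode here is identical for every network plugin in this group: socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket, so QEMU never receives a networking file descriptor and VM creation fails before boot. The following is a minimal sketch, not part of the test suite, that reproduces the failing step by dialing the socket directly (the socket path is the one shown in the logs):

    // probe_socket_vmnet.go - dial the socket_vmnet unix socket the same way
    // the qemu2 driver's helper must before it can launch a VM.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // path taken from the logs above
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // With the daemon down (or a stale socket file), this prints the
            // same "connection refused" seen in the STDERR lines above.
            fmt.Fprintf(os.Stderr, "cannot reach %s: %v\n", sock, err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

On a healthy host this prints a success line; on this CI worker it would fail exactly the way the driver does, which is why every test in the group dies at the same point.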

TestNetworkPlugins/group/bridge/Start (9.77s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-819000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-819000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.765308458s)

-- stdout --
	* [bridge-819000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-819000" primary control-plane node in "bridge-819000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-819000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 10:23:30.858270    5451 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:23:30.858404    5451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:23:30.858408    5451 out.go:304] Setting ErrFile to fd 2...
	I0307 10:23:30.858410    5451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:23:30.858537    5451 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:23:30.859711    5451 out.go:298] Setting JSON to false
	I0307 10:23:30.875928    5451 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4982,"bootTime":1709830828,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 10:23:30.875986    5451 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:23:30.880550    5451 out.go:177] * [bridge-819000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 10:23:30.888570    5451 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 10:23:30.892524    5451 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:23:30.888627    5451 notify.go:220] Checking for updates...
	I0307 10:23:30.906004    5451 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 10:23:30.909521    5451 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:23:30.912555    5451 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 10:23:30.915551    5451 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:23:30.918831    5451 config.go:182] Loaded profile config "multinode-606000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:23:30.918895    5451 config.go:182] Loaded profile config "stopped-upgrade-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 10:23:30.918950    5451 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:23:30.923516    5451 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 10:23:30.930484    5451 start.go:297] selected driver: qemu2
	I0307 10:23:30.930489    5451 start.go:901] validating driver "qemu2" against <nil>
	I0307 10:23:30.930495    5451 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:23:30.932628    5451 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 10:23:30.935506    5451 out.go:177] * Automatically selected the socket_vmnet network
	I0307 10:23:30.938562    5451 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 10:23:30.938592    5451 cni.go:84] Creating CNI manager for "bridge"
	I0307 10:23:30.938597    5451 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 10:23:30.938627    5451 start.go:340] cluster config:
	{Name:bridge-819000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:23:30.942632    5451 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:23:30.949515    5451 out.go:177] * Starting "bridge-819000" primary control-plane node in "bridge-819000" cluster
	I0307 10:23:30.953447    5451 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 10:23:30.953467    5451 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 10:23:30.953475    5451 cache.go:56] Caching tarball of preloaded images
	I0307 10:23:30.953524    5451 preload.go:173] Found /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 10:23:30.953530    5451 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 10:23:30.953587    5451 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/bridge-819000/config.json ...
	I0307 10:23:30.953598    5451 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/bridge-819000/config.json: {Name:mk5fbaa06e533fbabe55f9c61ef0d519bd452e28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:23:30.953909    5451 start.go:360] acquireMachinesLock for bridge-819000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:23:30.953948    5451 start.go:364] duration metric: took 28.208µs to acquireMachinesLock for "bridge-819000"
	I0307 10:23:30.953959    5451 start.go:93] Provisioning new machine with config: &{Name:bridge-819000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:23:30.953995    5451 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:23:30.958558    5451 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 10:23:30.973948    5451 start.go:159] libmachine.API.Create for "bridge-819000" (driver="qemu2")
	I0307 10:23:30.973971    5451 client.go:168] LocalClient.Create starting
	I0307 10:23:30.974027    5451 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:23:30.974054    5451 main.go:141] libmachine: Decoding PEM data...
	I0307 10:23:30.974063    5451 main.go:141] libmachine: Parsing certificate...
	I0307 10:23:30.974105    5451 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:23:30.974126    5451 main.go:141] libmachine: Decoding PEM data...
	I0307 10:23:30.974135    5451 main.go:141] libmachine: Parsing certificate...
	I0307 10:23:30.974518    5451 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:23:31.110575    5451 main.go:141] libmachine: Creating SSH key...
	I0307 10:23:31.188901    5451 main.go:141] libmachine: Creating Disk image...
	I0307 10:23:31.188913    5451 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:23:31.189090    5451 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/bridge-819000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/bridge-819000/disk.qcow2
	I0307 10:23:31.201417    5451 main.go:141] libmachine: STDOUT: 
	I0307 10:23:31.201434    5451 main.go:141] libmachine: STDERR: 
	I0307 10:23:31.201482    5451 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/bridge-819000/disk.qcow2 +20000M
	I0307 10:23:31.212796    5451 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:23:31.212820    5451 main.go:141] libmachine: STDERR: 
	I0307 10:23:31.212833    5451 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/bridge-819000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/bridge-819000/disk.qcow2
	I0307 10:23:31.212838    5451 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:23:31.212876    5451 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/bridge-819000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/bridge-819000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/bridge-819000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:ce:5c:08:f3:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/bridge-819000/disk.qcow2
	I0307 10:23:31.214906    5451 main.go:141] libmachine: STDOUT: 
	I0307 10:23:31.214921    5451 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:23:31.214940    5451 client.go:171] duration metric: took 240.971542ms to LocalClient.Create
	I0307 10:23:33.217126    5451 start.go:128] duration metric: took 2.263178292s to createHost
	I0307 10:23:33.217226    5451 start.go:83] releasing machines lock for "bridge-819000", held for 2.2633455s
	W0307 10:23:33.217276    5451 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:23:33.228045    5451 out.go:177] * Deleting "bridge-819000" in qemu2 ...
	W0307 10:23:33.257864    5451 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:23:33.257910    5451 start.go:728] Will try again in 5 seconds ...
	I0307 10:23:38.259939    5451 start.go:360] acquireMachinesLock for bridge-819000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:23:38.260422    5451 start.go:364] duration metric: took 345.25µs to acquireMachinesLock for "bridge-819000"
	I0307 10:23:38.260557    5451 start.go:93] Provisioning new machine with config: &{Name:bridge-819000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:23:38.260839    5451 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:23:38.268434    5451 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 10:23:38.317725    5451 start.go:159] libmachine.API.Create for "bridge-819000" (driver="qemu2")
	I0307 10:23:38.317778    5451 client.go:168] LocalClient.Create starting
	I0307 10:23:38.317895    5451 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:23:38.317962    5451 main.go:141] libmachine: Decoding PEM data...
	I0307 10:23:38.317980    5451 main.go:141] libmachine: Parsing certificate...
	I0307 10:23:38.318046    5451 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:23:38.318094    5451 main.go:141] libmachine: Decoding PEM data...
	I0307 10:23:38.318106    5451 main.go:141] libmachine: Parsing certificate...
	I0307 10:23:38.318804    5451 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:23:38.467015    5451 main.go:141] libmachine: Creating SSH key...
	I0307 10:23:38.521724    5451 main.go:141] libmachine: Creating Disk image...
	I0307 10:23:38.521733    5451 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:23:38.521926    5451 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/bridge-819000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/bridge-819000/disk.qcow2
	I0307 10:23:38.534591    5451 main.go:141] libmachine: STDOUT: 
	I0307 10:23:38.534611    5451 main.go:141] libmachine: STDERR: 
	I0307 10:23:38.534678    5451 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/bridge-819000/disk.qcow2 +20000M
	I0307 10:23:38.546144    5451 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:23:38.546164    5451 main.go:141] libmachine: STDERR: 
	I0307 10:23:38.546175    5451 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/bridge-819000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/bridge-819000/disk.qcow2
	I0307 10:23:38.546181    5451 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:23:38.546230    5451 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/bridge-819000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/bridge-819000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/bridge-819000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:d1:3d:cb:b5:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/bridge-819000/disk.qcow2
	I0307 10:23:38.548082    5451 main.go:141] libmachine: STDOUT: 
	I0307 10:23:38.548099    5451 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:23:38.548112    5451 client.go:171] duration metric: took 230.337ms to LocalClient.Create
	I0307 10:23:40.550165    5451 start.go:128] duration metric: took 2.289377834s to createHost
	I0307 10:23:40.550221    5451 start.go:83] releasing machines lock for "bridge-819000", held for 2.289851375s
	W0307 10:23:40.550424    5451 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-819000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-819000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:23:40.562754    5451 out.go:177] 
	W0307 10:23:40.565964    5451 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:23:40.565999    5451 out.go:239] * 
	* 
	W0307 10:23:40.567369    5451 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:23:40.578670    5451 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.77s)
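
The stderr trace also shows minikube's recovery path: the first host-creation attempt fails, the half-created profile is deleted ("Deleting ... in qemu2 ..."), the driver waits five seconds, and exactly one retry is made before the run exits with GUEST_PROVISION. A hedged sketch of that single-retry control flow, with createHost as a stand-in name for the real driver call:

    // retry_sketch.go - the fail / delete / wait 5s / retry-once pattern
    // visible in these logs, reduced to its control flow.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // createHost stands in for the real VM-creation call; on this worker it
    // always fails with the socket_vmnet connection error.
    func createHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        if err := createHost(); err != nil {
            fmt.Printf("! StartHost failed, but will try again: %v\n", err)
            time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
            if err := createHost(); err != nil {
                fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
            }
        }
    }

Because the daemon never comes back within those five seconds, the retry is deterministic: each attempt fails in roughly two seconds, which together with the five-second wait explains why all of these starts fail in about ten seconds.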

TestNetworkPlugins/group/kubenet/Start (9.91s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-819000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-819000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.903612s)

-- stdout --
	* [kubenet-819000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-819000" primary control-plane node in "kubenet-819000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-819000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 10:23:42.866505    5564 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:23:42.866625    5564 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:23:42.866629    5564 out.go:304] Setting ErrFile to fd 2...
	I0307 10:23:42.866632    5564 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:23:42.866750    5564 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:23:42.867873    5564 out.go:298] Setting JSON to false
	I0307 10:23:42.884327    5564 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4994,"bootTime":1709830828,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 10:23:42.884395    5564 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:23:42.890687    5564 out.go:177] * [kubenet-819000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 10:23:42.897413    5564 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 10:23:42.897431    5564 notify.go:220] Checking for updates...
	I0307 10:23:42.903292    5564 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:23:42.906384    5564 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 10:23:42.910419    5564 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:23:42.913377    5564 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 10:23:42.916360    5564 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:23:42.924723    5564 config.go:182] Loaded profile config "multinode-606000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:23:42.924787    5564 config.go:182] Loaded profile config "stopped-upgrade-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 10:23:42.924833    5564 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:23:42.929338    5564 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 10:23:42.936342    5564 start.go:297] selected driver: qemu2
	I0307 10:23:42.936347    5564 start.go:901] validating driver "qemu2" against <nil>
	I0307 10:23:42.936352    5564 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:23:42.938617    5564 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 10:23:42.941377    5564 out.go:177] * Automatically selected the socket_vmnet network
	I0307 10:23:42.945395    5564 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 10:23:42.945431    5564 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0307 10:23:42.945468    5564 start.go:340] cluster config:
	{Name:kubenet-819000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:23:42.950036    5564 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:23:42.958384    5564 out.go:177] * Starting "kubenet-819000" primary control-plane node in "kubenet-819000" cluster
	I0307 10:23:42.962338    5564 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 10:23:42.962352    5564 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 10:23:42.962363    5564 cache.go:56] Caching tarball of preloaded images
	I0307 10:23:42.962424    5564 preload.go:173] Found /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 10:23:42.962430    5564 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 10:23:42.962515    5564 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/kubenet-819000/config.json ...
	I0307 10:23:42.962529    5564 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/kubenet-819000/config.json: {Name:mkc6b2d9a78758742bd24da6b65dd5cadc786641 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:23:42.962738    5564 start.go:360] acquireMachinesLock for kubenet-819000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:23:42.962769    5564 start.go:364] duration metric: took 25.25µs to acquireMachinesLock for "kubenet-819000"
	I0307 10:23:42.962781    5564 start.go:93] Provisioning new machine with config: &{Name:kubenet-819000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:23:42.962811    5564 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:23:42.967324    5564 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 10:23:42.984568    5564 start.go:159] libmachine.API.Create for "kubenet-819000" (driver="qemu2")
	I0307 10:23:42.984594    5564 client.go:168] LocalClient.Create starting
	I0307 10:23:42.984666    5564 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:23:42.984696    5564 main.go:141] libmachine: Decoding PEM data...
	I0307 10:23:42.984704    5564 main.go:141] libmachine: Parsing certificate...
	I0307 10:23:42.984754    5564 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:23:42.984777    5564 main.go:141] libmachine: Decoding PEM data...
	I0307 10:23:42.984787    5564 main.go:141] libmachine: Parsing certificate...
	I0307 10:23:42.985158    5564 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:23:43.118659    5564 main.go:141] libmachine: Creating SSH key...
	I0307 10:23:43.270183    5564 main.go:141] libmachine: Creating Disk image...
	I0307 10:23:43.270192    5564 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:23:43.270365    5564 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubenet-819000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubenet-819000/disk.qcow2
	I0307 10:23:43.282818    5564 main.go:141] libmachine: STDOUT: 
	I0307 10:23:43.282845    5564 main.go:141] libmachine: STDERR: 
	I0307 10:23:43.282908    5564 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubenet-819000/disk.qcow2 +20000M
	I0307 10:23:43.294269    5564 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:23:43.294289    5564 main.go:141] libmachine: STDERR: 
	I0307 10:23:43.294303    5564 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubenet-819000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubenet-819000/disk.qcow2
	I0307 10:23:43.294307    5564 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:23:43.294346    5564 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubenet-819000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubenet-819000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubenet-819000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:0b:46:57:4c:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubenet-819000/disk.qcow2
	I0307 10:23:43.296269    5564 main.go:141] libmachine: STDOUT: 
	I0307 10:23:43.296285    5564 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:23:43.296304    5564 client.go:171] duration metric: took 311.714625ms to LocalClient.Create
	I0307 10:23:45.298514    5564 start.go:128] duration metric: took 2.335748084s to createHost
	I0307 10:23:45.298622    5564 start.go:83] releasing machines lock for "kubenet-819000", held for 2.335919666s
	W0307 10:23:45.298678    5564 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:23:45.313654    5564 out.go:177] * Deleting "kubenet-819000" in qemu2 ...
	W0307 10:23:45.337286    5564 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:23:45.337328    5564 start.go:728] Will try again in 5 seconds ...
	I0307 10:23:50.339434    5564 start.go:360] acquireMachinesLock for kubenet-819000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:23:50.339982    5564 start.go:364] duration metric: took 448.875µs to acquireMachinesLock for "kubenet-819000"
	I0307 10:23:50.340148    5564 start.go:93] Provisioning new machine with config: &{Name:kubenet-819000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:23:50.340388    5564 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:23:50.349898    5564 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 10:23:50.396543    5564 start.go:159] libmachine.API.Create for "kubenet-819000" (driver="qemu2")
	I0307 10:23:50.396616    5564 client.go:168] LocalClient.Create starting
	I0307 10:23:50.396751    5564 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:23:50.396815    5564 main.go:141] libmachine: Decoding PEM data...
	I0307 10:23:50.396834    5564 main.go:141] libmachine: Parsing certificate...
	I0307 10:23:50.396906    5564 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:23:50.396949    5564 main.go:141] libmachine: Decoding PEM data...
	I0307 10:23:50.396962    5564 main.go:141] libmachine: Parsing certificate...
	I0307 10:23:50.397546    5564 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:23:50.543277    5564 main.go:141] libmachine: Creating SSH key...
	I0307 10:23:50.671179    5564 main.go:141] libmachine: Creating Disk image...
	I0307 10:23:50.671187    5564 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:23:50.671376    5564 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubenet-819000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubenet-819000/disk.qcow2
	I0307 10:23:50.683531    5564 main.go:141] libmachine: STDOUT: 
	I0307 10:23:50.683564    5564 main.go:141] libmachine: STDERR: 
	I0307 10:23:50.683615    5564 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubenet-819000/disk.qcow2 +20000M
	I0307 10:23:50.694361    5564 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:23:50.694383    5564 main.go:141] libmachine: STDERR: 
	I0307 10:23:50.694404    5564 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubenet-819000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubenet-819000/disk.qcow2
	I0307 10:23:50.694415    5564 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:23:50.694453    5564 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubenet-819000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubenet-819000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubenet-819000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:13:f3:07:de:89 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/kubenet-819000/disk.qcow2
	I0307 10:23:50.696177    5564 main.go:141] libmachine: STDOUT: 
	I0307 10:23:50.696192    5564 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:23:50.696206    5564 client.go:171] duration metric: took 299.581417ms to LocalClient.Create
	I0307 10:23:52.698350    5564 start.go:128] duration metric: took 2.357999958s to createHost
	I0307 10:23:52.698446    5564 start.go:83] releasing machines lock for "kubenet-819000", held for 2.358515208s
	W0307 10:23:52.698857    5564 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-819000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:23:52.709613    5564 out.go:177] 
	W0307 10:23:52.714593    5564 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:23:52.714635    5564 out.go:239] * 
	W0307 10:23:52.716600    5564 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:23:52.725572    5564 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.91s)
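Every failure in this group reduces to the same root cause visible in the stderr above: socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket, so the qemu2 driver never receives a network file descriptor and createHost aborts. Below is a minimal, hypothetical Go sketch of a preflight probe for that socket; the path /var/run/socket_vmnet is taken from the SocketVMnetPath recorded in the cluster configs above, while the program itself is illustrative and not part of minikube or its test suite.

	// preflight.go: probe the socket_vmnet control socket before launching QEMU.
	// A refused dial here corresponds to the driver error seen in the logs:
	//   Failed to connect to "/var/run/socket_vmnet": Connection refused
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			fmt.Fprintln(os.Stderr, "is the socket_vmnet daemon running on this host?")
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

A probe like this succeeding would indicate the daemon is up; on this agent it would fail exactly as the driver does, which is consistent with all of the qemu2 starts in this run failing in under ten seconds.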

TestStartStop/group/old-k8s-version/serial/FirstStart (9.84s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-658000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-658000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.783816708s)

-- stdout --
	* [old-k8s-version-658000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-658000" primary control-plane node in "old-k8s-version-658000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-658000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 10:23:55.011955    5685 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:23:55.012087    5685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:23:55.012090    5685 out.go:304] Setting ErrFile to fd 2...
	I0307 10:23:55.012093    5685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:23:55.012213    5685 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:23:55.013333    5685 out.go:298] Setting JSON to false
	I0307 10:23:55.029767    5685 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5007,"bootTime":1709830828,"procs":484,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 10:23:55.029837    5685 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:23:55.035658    5685 out.go:177] * [old-k8s-version-658000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 10:23:55.042543    5685 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 10:23:55.042655    5685 notify.go:220] Checking for updates...
	I0307 10:23:55.050489    5685 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:23:55.053526    5685 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 10:23:55.056618    5685 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:23:55.059597    5685 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 10:23:55.062605    5685 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:23:55.065893    5685 config.go:182] Loaded profile config "multinode-606000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:23:55.065956    5685 config.go:182] Loaded profile config "stopped-upgrade-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0307 10:23:55.066005    5685 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:23:55.069475    5685 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 10:23:55.076578    5685 start.go:297] selected driver: qemu2
	I0307 10:23:55.076582    5685 start.go:901] validating driver "qemu2" against <nil>
	I0307 10:23:55.076587    5685 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:23:55.078879    5685 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 10:23:55.082516    5685 out.go:177] * Automatically selected the socket_vmnet network
	I0307 10:23:55.085652    5685 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 10:23:55.085692    5685 cni.go:84] Creating CNI manager for ""
	I0307 10:23:55.085699    5685 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0307 10:23:55.085725    5685 start.go:340] cluster config:
	{Name:old-k8s-version-658000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-658000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:23:55.090139    5685 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:23:55.097563    5685 out.go:177] * Starting "old-k8s-version-658000" primary control-plane node in "old-k8s-version-658000" cluster
	I0307 10:23:55.101596    5685 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0307 10:23:55.101611    5685 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0307 10:23:55.101624    5685 cache.go:56] Caching tarball of preloaded images
	I0307 10:23:55.101694    5685 preload.go:173] Found /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 10:23:55.101700    5685 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0307 10:23:55.101770    5685 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/old-k8s-version-658000/config.json ...
	I0307 10:23:55.101781    5685 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/old-k8s-version-658000/config.json: {Name:mk7875d226c5179a1d10a7e6ccdbcc3f7bfd2f8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:23:55.101982    5685 start.go:360] acquireMachinesLock for old-k8s-version-658000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:23:55.102013    5685 start.go:364] duration metric: took 22.333µs to acquireMachinesLock for "old-k8s-version-658000"
	I0307 10:23:55.102022    5685 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-658000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-658000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:23:55.102053    5685 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:23:55.109605    5685 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 10:23:55.124772    5685 start.go:159] libmachine.API.Create for "old-k8s-version-658000" (driver="qemu2")
	I0307 10:23:55.124796    5685 client.go:168] LocalClient.Create starting
	I0307 10:23:55.124862    5685 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:23:55.124890    5685 main.go:141] libmachine: Decoding PEM data...
	I0307 10:23:55.124902    5685 main.go:141] libmachine: Parsing certificate...
	I0307 10:23:55.124944    5685 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:23:55.124966    5685 main.go:141] libmachine: Decoding PEM data...
	I0307 10:23:55.124973    5685 main.go:141] libmachine: Parsing certificate...
	I0307 10:23:55.125324    5685 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:23:55.261534    5685 main.go:141] libmachine: Creating SSH key...
	I0307 10:23:55.369117    5685 main.go:141] libmachine: Creating Disk image...
	I0307 10:23:55.369125    5685 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:23:55.369298    5685 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/old-k8s-version-658000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/old-k8s-version-658000/disk.qcow2
	I0307 10:23:55.381592    5685 main.go:141] libmachine: STDOUT: 
	I0307 10:23:55.381616    5685 main.go:141] libmachine: STDERR: 
	I0307 10:23:55.381689    5685 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/old-k8s-version-658000/disk.qcow2 +20000M
	I0307 10:23:55.392391    5685 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:23:55.392409    5685 main.go:141] libmachine: STDERR: 
	I0307 10:23:55.392432    5685 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/old-k8s-version-658000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/old-k8s-version-658000/disk.qcow2
	I0307 10:23:55.392449    5685 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:23:55.392485    5685 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/old-k8s-version-658000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/old-k8s-version-658000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/old-k8s-version-658000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:f9:37:d4:94:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/old-k8s-version-658000/disk.qcow2
	I0307 10:23:55.394198    5685 main.go:141] libmachine: STDOUT: 
	I0307 10:23:55.394216    5685 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:23:55.394237    5685 client.go:171] duration metric: took 269.442416ms to LocalClient.Create
	I0307 10:23:57.396422    5685 start.go:128] duration metric: took 2.294412125s to createHost
	I0307 10:23:57.396529    5685 start.go:83] releasing machines lock for "old-k8s-version-658000", held for 2.294582042s
	W0307 10:23:57.396592    5685 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:23:57.407908    5685 out.go:177] * Deleting "old-k8s-version-658000" in qemu2 ...
	W0307 10:23:57.431222    5685 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:23:57.431264    5685 start.go:728] Will try again in 5 seconds ...
	I0307 10:24:02.433359    5685 start.go:360] acquireMachinesLock for old-k8s-version-658000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:24:02.433887    5685 start.go:364] duration metric: took 394µs to acquireMachinesLock for "old-k8s-version-658000"
	I0307 10:24:02.434050    5685 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-658000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-658000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:24:02.434306    5685 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:24:02.444044    5685 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 10:24:02.492657    5685 start.go:159] libmachine.API.Create for "old-k8s-version-658000" (driver="qemu2")
	I0307 10:24:02.492706    5685 client.go:168] LocalClient.Create starting
	I0307 10:24:02.492833    5685 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:24:02.492888    5685 main.go:141] libmachine: Decoding PEM data...
	I0307 10:24:02.492904    5685 main.go:141] libmachine: Parsing certificate...
	I0307 10:24:02.492975    5685 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:24:02.493017    5685 main.go:141] libmachine: Decoding PEM data...
	I0307 10:24:02.493032    5685 main.go:141] libmachine: Parsing certificate...
	I0307 10:24:02.493518    5685 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:24:02.639853    5685 main.go:141] libmachine: Creating SSH key...
	I0307 10:24:02.695485    5685 main.go:141] libmachine: Creating Disk image...
	I0307 10:24:02.695491    5685 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:24:02.695675    5685 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/old-k8s-version-658000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/old-k8s-version-658000/disk.qcow2
	I0307 10:24:02.709496    5685 main.go:141] libmachine: STDOUT: 
	I0307 10:24:02.709520    5685 main.go:141] libmachine: STDERR: 
	I0307 10:24:02.709597    5685 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/old-k8s-version-658000/disk.qcow2 +20000M
	I0307 10:24:02.722107    5685 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:24:02.722131    5685 main.go:141] libmachine: STDERR: 
	I0307 10:24:02.722145    5685 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/old-k8s-version-658000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/old-k8s-version-658000/disk.qcow2
	I0307 10:24:02.722149    5685 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:24:02.722188    5685 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/old-k8s-version-658000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/old-k8s-version-658000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/old-k8s-version-658000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:4f:9e:9f:f0:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/old-k8s-version-658000/disk.qcow2
	I0307 10:24:02.723865    5685 main.go:141] libmachine: STDOUT: 
	I0307 10:24:02.723883    5685 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:24:02.723913    5685 client.go:171] duration metric: took 231.189ms to LocalClient.Create
	I0307 10:24:04.726061    5685 start.go:128] duration metric: took 2.291760667s to createHost
	I0307 10:24:04.726137    5685 start.go:83] releasing machines lock for "old-k8s-version-658000", held for 2.292271209s
	W0307 10:24:04.726481    5685 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-658000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:24:04.737093    5685 out.go:177] 
	W0307 10:24:04.740234    5685 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:24:04.740276    5685 out.go:239] * 
	W0307 10:24:04.743648    5685 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:24:04.754072    5685 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-658000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-658000 -n old-k8s-version-658000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-658000 -n old-k8s-version-658000: exit status 7 (53.949625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-658000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.84s)
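The trace above also shows the driver's recovery path before it gives up: the first createHost fails, minikube deletes the half-created profile, waits five seconds ("Will try again in 5 seconds ..."), retries once, and only then exits with GUEST_PROVISION. The Go sketch below is a hedged illustration of that one-retry shape, not minikube's actual code; createHost here is a stand-in for the real provisioning call, and only the overall pattern (delete, fixed delay, single retry, exit status 80) is taken from the log.

	// retry.go: illustrative one-retry provisioning loop mirroring the log above.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// createHost stands in for the qemu2 driver's create path, which fails
	// for as long as the socket_vmnet daemon is unreachable.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err = createHost(); err != nil {
				fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
				os.Exit(80) // the exit status the tests record above
			}
		}
		fmt.Println("host created")
	}

Because the daemon stays down for the whole run, both attempts fail in every test, which is why each FirstStart test completes in roughly the 2 s + 5 s + 2 s that two createHost timeouts plus the retry delay add up to.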

TestStartStop/group/no-preload/serial/FirstStart (11.69s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-594000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-594000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (11.620203083s)

-- stdout --
	* [no-preload-594000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-594000" primary control-plane node in "no-preload-594000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-594000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 10:24:02.879803    5703 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:24:02.879996    5703 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:24:02.879999    5703 out.go:304] Setting ErrFile to fd 2...
	I0307 10:24:02.880001    5703 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:24:02.880133    5703 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:24:02.881177    5703 out.go:298] Setting JSON to false
	I0307 10:24:02.897275    5703 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5014,"bootTime":1709830828,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 10:24:02.897332    5703 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:24:02.900969    5703 out.go:177] * [no-preload-594000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 10:24:02.907887    5703 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 10:24:02.907906    5703 notify.go:220] Checking for updates...
	I0307 10:24:02.914934    5703 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:24:02.917936    5703 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 10:24:02.920954    5703 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:24:02.923814    5703 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 10:24:02.926944    5703 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:24:02.930326    5703 config.go:182] Loaded profile config "multinode-606000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:24:02.930400    5703 config.go:182] Loaded profile config "old-k8s-version-658000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0307 10:24:02.930458    5703 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:24:02.932904    5703 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 10:24:02.939894    5703 start.go:297] selected driver: qemu2
	I0307 10:24:02.939900    5703 start.go:901] validating driver "qemu2" against <nil>
	I0307 10:24:02.939906    5703 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:24:02.942175    5703 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 10:24:02.943549    5703 out.go:177] * Automatically selected the socket_vmnet network
	I0307 10:24:02.947023    5703 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 10:24:02.947076    5703 cni.go:84] Creating CNI manager for ""
	I0307 10:24:02.947084    5703 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 10:24:02.947099    5703 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 10:24:02.947122    5703 start.go:340] cluster config:
	{Name:no-preload-594000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-594000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:24:02.951560    5703 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:24:02.958891    5703 out.go:177] * Starting "no-preload-594000" primary control-plane node in "no-preload-594000" cluster
	I0307 10:24:02.962909    5703 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0307 10:24:02.962992    5703 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/no-preload-594000/config.json ...
	I0307 10:24:02.963008    5703 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/no-preload-594000/config.json: {Name:mkb4464a8997685fd57a51d5b5885adfa12e93a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:24:02.963007    5703 cache.go:107] acquiring lock: {Name:mk55b0c5ddedbe4e05f714622b37932bb306454f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:24:02.963014    5703 cache.go:107] acquiring lock: {Name:mkaed600b7068dea88d7a0773cc4880ecec73127 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:24:02.963059    5703 cache.go:107] acquiring lock: {Name:mkd1268bdccb1b421c0a3616ff7e979d0471b45b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:24:02.963080    5703 cache.go:115] /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0307 10:24:02.963089    5703 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 84.292µs
	I0307 10:24:02.963098    5703 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0307 10:24:02.963111    5703 cache.go:107] acquiring lock: {Name:mkd6a5d4c95132c535988dca45b68c147ab46796 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:24:02.963172    5703 cache.go:107] acquiring lock: {Name:mk5ed0d120c944b1444f48802164d0609b6c2750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:24:02.963216    5703 cache.go:107] acquiring lock: {Name:mk6ab3dcf4bb3e285e73da762086b70625c4d20b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:24:02.963242    5703 cache.go:107] acquiring lock: {Name:mk975cc6bf1fc9bd8fb3b38cf722f30c8c4847ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:24:02.963256    5703 cache.go:107] acquiring lock: {Name:mkc791b1d5c98f4be0219cb4d9201ae786462f07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:24:02.963233    5703 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0307 10:24:02.963267    5703 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0307 10:24:02.963278    5703 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0307 10:24:02.963419    5703 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0307 10:24:02.963481    5703 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0307 10:24:02.963488    5703 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0307 10:24:02.963554    5703 start.go:360] acquireMachinesLock for no-preload-594000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:24:02.963601    5703 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0307 10:24:02.969156    5703 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0307 10:24:02.970133    5703 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0307 10:24:02.970175    5703 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0307 10:24:02.970137    5703 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0307 10:24:02.970185    5703 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0307 10:24:02.970201    5703 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0307 10:24:02.970248    5703 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0307 10:24:04.726336    5703 start.go:364] duration metric: took 1.762812s to acquireMachinesLock for "no-preload-594000"
	I0307 10:24:04.726601    5703 start.go:93] Provisioning new machine with config: &{Name:no-preload-594000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-594000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:24:04.726824    5703 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:24:04.737097    5703 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 10:24:04.787032    5703 start.go:159] libmachine.API.Create for "no-preload-594000" (driver="qemu2")
	I0307 10:24:04.787072    5703 client.go:168] LocalClient.Create starting
	I0307 10:24:04.787159    5703 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:24:04.787206    5703 main.go:141] libmachine: Decoding PEM data...
	I0307 10:24:04.787221    5703 main.go:141] libmachine: Parsing certificate...
	I0307 10:24:04.787277    5703 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:24:04.787316    5703 main.go:141] libmachine: Decoding PEM data...
	I0307 10:24:04.787326    5703 main.go:141] libmachine: Parsing certificate...
	I0307 10:24:04.787897    5703 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:24:04.889297    5703 cache.go:162] opening:  /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0
	I0307 10:24:04.942769    5703 main.go:141] libmachine: Creating SSH key...
	I0307 10:24:05.008048    5703 cache.go:162] opening:  /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0307 10:24:05.030903    5703 cache.go:162] opening:  /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0307 10:24:05.034851    5703 cache.go:162] opening:  /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0307 10:24:05.048274    5703 main.go:141] libmachine: Creating Disk image...
	I0307 10:24:05.048284    5703 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:24:05.048471    5703 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/no-preload-594000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/no-preload-594000/disk.qcow2
	I0307 10:24:05.053226    5703 cache.go:162] opening:  /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0307 10:24:05.058614    5703 cache.go:162] opening:  /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0307 10:24:05.061434    5703 main.go:141] libmachine: STDOUT: 
	I0307 10:24:05.061449    5703 main.go:141] libmachine: STDERR: 
	I0307 10:24:05.061503    5703 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/no-preload-594000/disk.qcow2 +20000M
	I0307 10:24:05.062248    5703 cache.go:162] opening:  /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0307 10:24:05.074756    5703 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:24:05.074770    5703 main.go:141] libmachine: STDERR: 
	I0307 10:24:05.074782    5703 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/no-preload-594000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/no-preload-594000/disk.qcow2
	I0307 10:24:05.074785    5703 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:24:05.074811    5703 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/no-preload-594000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/no-preload-594000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/no-preload-594000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:51:8e:d6:41:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/no-preload-594000/disk.qcow2
	I0307 10:24:05.077064    5703 main.go:141] libmachine: STDOUT: 
	I0307 10:24:05.077083    5703 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:24:05.077100    5703 client.go:171] duration metric: took 290.032291ms to LocalClient.Create
	I0307 10:24:05.175703    5703 cache.go:157] /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0307 10:24:05.175722    5703 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 2.212752041s
	I0307 10:24:05.175732    5703 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0307 10:24:07.077271    5703 start.go:128] duration metric: took 2.350483833s to createHost
	I0307 10:24:07.077344    5703 start.go:83] releasing machines lock for "no-preload-594000", held for 2.3509895s
	W0307 10:24:07.077438    5703 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:24:07.089736    5703 out.go:177] * Deleting "no-preload-594000" in qemu2 ...
	W0307 10:24:07.116493    5703 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:24:07.116528    5703 start.go:728] Will try again in 5 seconds ...
	I0307 10:24:07.546799    5703 cache.go:157] /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0307 10:24:07.546887    5703 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 4.583865875s
	I0307 10:24:07.546923    5703 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0307 10:24:08.641364    5703 cache.go:157] /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0307 10:24:08.641383    5703 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 5.67832975s
	I0307 10:24:08.641390    5703 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0307 10:24:09.826985    5703 cache.go:157] /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0307 10:24:09.827048    5703 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 6.864267375s
	I0307 10:24:09.827081    5703 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0307 10:24:10.383032    5703 cache.go:157] /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0307 10:24:10.383092    5703 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 7.420325541s
	I0307 10:24:10.383138    5703 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0307 10:24:10.455156    5703 cache.go:157] /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0307 10:24:10.455205    5703 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 7.492225625s
	I0307 10:24:10.455245    5703 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0307 10:24:12.116533    5703 start.go:360] acquireMachinesLock for no-preload-594000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:24:12.116834    5703 start.go:364] duration metric: took 232.667µs to acquireMachinesLock for "no-preload-594000"
	I0307 10:24:12.116934    5703 start.go:93] Provisioning new machine with config: &{Name:no-preload-594000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-594000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:24:12.117179    5703 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:24:12.125667    5703 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 10:24:12.173703    5703 start.go:159] libmachine.API.Create for "no-preload-594000" (driver="qemu2")
	I0307 10:24:12.173826    5703 client.go:168] LocalClient.Create starting
	I0307 10:24:12.173996    5703 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:24:12.174065    5703 main.go:141] libmachine: Decoding PEM data...
	I0307 10:24:12.174085    5703 main.go:141] libmachine: Parsing certificate...
	I0307 10:24:12.174147    5703 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:24:12.174190    5703 main.go:141] libmachine: Decoding PEM data...
	I0307 10:24:12.174206    5703 main.go:141] libmachine: Parsing certificate...
	I0307 10:24:12.174721    5703 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:24:12.322225    5703 main.go:141] libmachine: Creating SSH key...
	I0307 10:24:12.371488    5703 main.go:141] libmachine: Creating Disk image...
	I0307 10:24:12.371494    5703 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:24:12.371660    5703 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/no-preload-594000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/no-preload-594000/disk.qcow2
	I0307 10:24:12.384158    5703 main.go:141] libmachine: STDOUT: 
	I0307 10:24:12.384177    5703 main.go:141] libmachine: STDERR: 
	I0307 10:24:12.384225    5703 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/no-preload-594000/disk.qcow2 +20000M
	I0307 10:24:12.395331    5703 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:24:12.395345    5703 main.go:141] libmachine: STDERR: 
	I0307 10:24:12.395363    5703 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/no-preload-594000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/no-preload-594000/disk.qcow2
	I0307 10:24:12.395367    5703 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:24:12.395402    5703 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/no-preload-594000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/no-preload-594000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/no-preload-594000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:4f:8b:6c:28:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/no-preload-594000/disk.qcow2
	I0307 10:24:12.397187    5703 main.go:141] libmachine: STDOUT: 
	I0307 10:24:12.397201    5703 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:24:12.397216    5703 client.go:171] duration metric: took 223.383167ms to LocalClient.Create
	I0307 10:24:13.077480    5703 cache.go:157] /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 exists
	I0307 10:24:13.077550    5703 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0" took 10.114768042s
	I0307 10:24:13.077597    5703 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0307 10:24:13.077730    5703 cache.go:87] Successfully saved all images to host disk.
	I0307 10:24:14.399420    5703 start.go:128] duration metric: took 2.282256208s to createHost
	I0307 10:24:14.399483    5703 start.go:83] releasing machines lock for "no-preload-594000", held for 2.282705041s
	W0307 10:24:14.399803    5703 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-594000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-594000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:24:14.412348    5703 out.go:177] 
	W0307 10:24:14.425394    5703 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:24:14.425424    5703 out.go:239] * 
	* 
	W0307 10:24:14.428317    5703 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:24:14.439338    5703 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-594000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-594000 -n no-preload-594000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-594000 -n no-preload-594000: exit status 7 (68.879375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-594000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (11.69s)
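
Every qemu2 start in this run dies at the same point: socket_vmnet_client cannot reach the host-side socket_vmnet daemon at /var/run/socket_vmnet, so QEMU never receives the file descriptor for its -netdev socket,fd=3 backend and the VM is never created. A minimal Go sketch of that check (illustrative only, not part of the test suite; the socket path is the SocketVMnetPath from the config dumps above):

    // probe_socket_vmnet.go: dial the socket_vmnet unix socket the way
    // socket_vmnet_client would. "connection refused" here reproduces the
    // error seen in every failed start in this report.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet"
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
            os.Exit(1)
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }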

TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-658000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-658000 create -f testdata/busybox.yaml: exit status 1 (31.4155ms)

** stderr ** 
	error: context "old-k8s-version-658000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-658000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-658000 -n old-k8s-version-658000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-658000 -n old-k8s-version-658000: exit status 7 (35.25475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-658000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-658000 -n old-k8s-version-658000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-658000 -n old-k8s-version-658000: exit status 7 (36.335041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-658000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)
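
The `context "..." does not exist` failures here and in the tests below are downstream of the failed FirstStart: the profile's context was never written to the kubeconfig, so every kubectl --context invocation exits 1 before it ever reaches a cluster. A sketch of the same check in Go (assumes k8s.io/client-go is available; not part of the suite):

    // Check whether a kubeconfig context exists, mirroring the check that
    // makes kubectl fail with `context "..." does not exist`.
    package main

    import (
        "fmt"
        "os"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        rules := clientcmd.NewDefaultClientConfigLoadingRules()
        cfg, err := rules.Load() // merges $KUBECONFIG / ~/.kube/config
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        const ctx = "old-k8s-version-658000"
        if _, ok := cfg.Contexts[ctx]; !ok {
            fmt.Fprintf(os.Stderr, "context %q does not exist\n", ctx)
            os.Exit(1)
        }
        fmt.Printf("context %q found\n", ctx)
    }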

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-658000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-658000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-658000 describe deploy/metrics-server -n kube-system: exit status 1 (27.952833ms)

** stderr ** 
	error: context "old-k8s-version-658000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-658000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-658000 -n old-k8s-version-658000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-658000 -n old-k8s-version-658000: exit status 7 (32.843708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-658000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.13s)
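
Note the shape of the expected value: the test passes --images=MetricsServer=registry.k8s.io/echoserver:1.4 and --registries=MetricsServer=fake.domain, and expects the deployment to reference the concatenation of the two. A one-liner makes the composition explicit (an illustration of the expected string, not minikube's internal code):

    // The --registries override is prefixed onto the overridden image name.
    package main

    import "fmt"

    func main() {
        registry := "fake.domain"
        image := "registry.k8s.io/echoserver:1.4"
        fmt.Println(registry + "/" + image) // fake.domain/registry.k8s.io/echoserver:1.4
    }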

TestStartStop/group/old-k8s-version/serial/SecondStart (5.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-658000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
E0307 10:24:13.778073    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-658000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.79574125s)

-- stdout --
	* [old-k8s-version-658000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-658000" primary control-plane node in "old-k8s-version-658000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-658000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-658000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 10:24:08.714204    5777 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:24:08.714328    5777 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:24:08.714331    5777 out.go:304] Setting ErrFile to fd 2...
	I0307 10:24:08.714334    5777 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:24:08.714467    5777 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:24:08.715484    5777 out.go:298] Setting JSON to false
	I0307 10:24:08.731673    5777 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5020,"bootTime":1709830828,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 10:24:08.731739    5777 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:24:08.734954    5777 out.go:177] * [old-k8s-version-658000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 10:24:08.747918    5777 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 10:24:08.743111    5777 notify.go:220] Checking for updates...
	I0307 10:24:08.754987    5777 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:24:08.761969    5777 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 10:24:08.764933    5777 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:24:08.772019    5777 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 10:24:08.783020    5777 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:24:08.787257    5777 config.go:182] Loaded profile config "old-k8s-version-658000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0307 10:24:08.791918    5777 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0307 10:24:08.796038    5777 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:24:08.800040    5777 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 10:24:08.806953    5777 start.go:297] selected driver: qemu2
	I0307 10:24:08.806959    5777 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-658000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-658000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:24:08.807026    5777 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:24:08.809622    5777 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 10:24:08.809683    5777 cni.go:84] Creating CNI manager for ""
	I0307 10:24:08.809690    5777 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0307 10:24:08.809721    5777 start.go:340] cluster config:
	{Name:old-k8s-version-658000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-658000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:24:08.814329    5777 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:24:08.822952    5777 out.go:177] * Starting "old-k8s-version-658000" primary control-plane node in "old-k8s-version-658000" cluster
	I0307 10:24:08.826055    5777 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0307 10:24:08.826069    5777 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0307 10:24:08.826080    5777 cache.go:56] Caching tarball of preloaded images
	I0307 10:24:08.826151    5777 preload.go:173] Found /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 10:24:08.826171    5777 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0307 10:24:08.826251    5777 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/old-k8s-version-658000/config.json ...
	I0307 10:24:08.826680    5777 start.go:360] acquireMachinesLock for old-k8s-version-658000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:24:08.826722    5777 start.go:364] duration metric: took 34.125µs to acquireMachinesLock for "old-k8s-version-658000"
	I0307 10:24:08.826732    5777 start.go:96] Skipping create...Using existing machine configuration
	I0307 10:24:08.826737    5777 fix.go:54] fixHost starting: 
	I0307 10:24:08.826867    5777 fix.go:112] recreateIfNeeded on old-k8s-version-658000: state=Stopped err=<nil>
	W0307 10:24:08.826878    5777 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 10:24:08.834854    5777 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-658000" ...
	I0307 10:24:08.838069    5777 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/old-k8s-version-658000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/old-k8s-version-658000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/old-k8s-version-658000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:4f:9e:9f:f0:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/old-k8s-version-658000/disk.qcow2
	I0307 10:24:08.840239    5777 main.go:141] libmachine: STDOUT: 
	I0307 10:24:08.840258    5777 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:24:08.840290    5777 fix.go:56] duration metric: took 13.554292ms for fixHost
	I0307 10:24:08.840296    5777 start.go:83] releasing machines lock for "old-k8s-version-658000", held for 13.568583ms
	W0307 10:24:08.840302    5777 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:24:08.840356    5777 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:24:08.840361    5777 start.go:728] Will try again in 5 seconds ...
	I0307 10:24:13.842412    5777 start.go:360] acquireMachinesLock for old-k8s-version-658000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:24:14.399664    5777 start.go:364] duration metric: took 557.104833ms to acquireMachinesLock for "old-k8s-version-658000"
	I0307 10:24:14.399872    5777 start.go:96] Skipping create...Using existing machine configuration
	I0307 10:24:14.399888    5777 fix.go:54] fixHost starting: 
	I0307 10:24:14.400562    5777 fix.go:112] recreateIfNeeded on old-k8s-version-658000: state=Stopped err=<nil>
	W0307 10:24:14.400590    5777 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 10:24:14.421369    5777 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-658000" ...
	I0307 10:24:14.426925    5777 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/old-k8s-version-658000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/old-k8s-version-658000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/old-k8s-version-658000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:4f:9e:9f:f0:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/old-k8s-version-658000/disk.qcow2
	I0307 10:24:14.437399    5777 main.go:141] libmachine: STDOUT: 
	I0307 10:24:14.437496    5777 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:24:14.437584    5777 fix.go:56] duration metric: took 37.694833ms for fixHost
	I0307 10:24:14.437602    5777 start.go:83] releasing machines lock for "old-k8s-version-658000", held for 37.867375ms
	W0307 10:24:14.437766    5777 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-658000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-658000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:24:14.451342    5777 out.go:177] 
	W0307 10:24:14.455392    5777 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:24:14.455454    5777 out.go:239] * 
	* 
	W0307 10:24:14.458073    5777 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:24:14.469300    5777 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-658000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-658000 -n old-k8s-version-658000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-658000 -n old-k8s-version-658000: exit status 7 (61.213375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-658000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.86s)
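
The stderr log above shows the start retry in full: fixHost fails at 10:24:08, start.go:728 logs "Will try again in 5 seconds", and the second attempt at 10:24:13 fails identically before the run exits with GUEST_PROVISION. Schematically (a sketch of the pattern visible in the log, not the actual start.go code):

    // One retry with a fixed 5s delay, as logged above.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func startHost() error {
        // Stand-in for the driver start; in this run it always fails with
        // the same socket_vmnet connection-refused error.
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        if err := startHost(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second)
            if err = startHost(); err != nil {
                fmt.Println("X Exiting due to GUEST_PROVISION:", err)
            }
        }
    }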

TestStartStop/group/no-preload/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-594000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-594000 create -f testdata/busybox.yaml: exit status 1 (31.453125ms)

** stderr ** 
	error: context "no-preload-594000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-594000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-594000 -n no-preload-594000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-594000 -n no-preload-594000: exit status 7 (32.9475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-594000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-594000 -n no-preload-594000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-594000 -n no-preload-594000: exit status 7 (34.783208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-594000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)
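
"exit status 7" from minikube status recurs throughout these post-mortems. The status command composes its exit code from bit flags for the host, the cluster, and kubernetes, so a fully stopped profile reports 1|2|4 = 7; the flag names below are an assumption for illustration, not a quote of minikube's source:

    // How three "not running" bit flags compose into exit status 7.
    package main

    import "fmt"

    const (
        hostNotRunning    = 1 << 0 // flag names are hypothetical
        clusterNotRunning = 1 << 1
        k8sNotRunning     = 1 << 2
    )

    func main() {
        code := hostNotRunning | clusterNotRunning | k8sNotRunning
        fmt.Println(code) // 7, matching the status calls above
    }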

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-658000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-658000 -n old-k8s-version-658000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-658000 -n old-k8s-version-658000: exit status 7 (36.162042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-658000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-658000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-658000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-658000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.084333ms)

** stderr ** 
	error: context "old-k8s-version-658000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-658000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-658000 -n old-k8s-version-658000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-658000 -n old-k8s-version-658000: exit status 7 (32.624ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-658000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-594000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-594000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-594000 describe deploy/metrics-server -n kube-system: exit status 1 (30.74925ms)

** stderr ** 
	error: context "no-preload-594000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-594000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-594000 -n no-preload-594000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-594000 -n no-preload-594000: exit status 7 (34.62675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-594000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.13s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-658000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-658000 -n old-k8s-version-658000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-658000 -n old-k8s-version-658000: exit status 7 (32.648166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-658000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.09s)
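
The missing-images list above is printed in (-want +got) form, a layout matching github.com/google/go-cmp's cmp.Diff: with the VM never started, `image list` returns nothing, so every expected v1.20.0 image appears as a removed ("-") line. A self-contained sketch of such a diff (assumes the go-cmp module; abbreviated want list):

    // Reproduce a (-want +got) diff against an empty "got" slice.
    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        want := []string{
            "k8s.gcr.io/coredns:1.7.0",
            "k8s.gcr.io/etcd:3.4.13-0",
            "k8s.gcr.io/kube-apiserver:v1.20.0",
        }
        var got []string // empty: the host is Stopped, nothing to list
        if diff := cmp.Diff(want, got); diff != "" {
            fmt.Printf("v1.20.0 images missing (-want +got):\n%s", diff)
        }
    }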

TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-658000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-658000 --alsologtostderr -v=1: exit status 83 (50.517709ms)

-- stdout --
	* The control-plane node old-k8s-version-658000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-658000"

-- /stdout --
** stderr ** 
	I0307 10:24:14.765774    5811 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:24:14.766154    5811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:24:14.766160    5811 out.go:304] Setting ErrFile to fd 2...
	I0307 10:24:14.766162    5811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:24:14.766291    5811 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:24:14.766489    5811 out.go:298] Setting JSON to false
	I0307 10:24:14.766496    5811 mustload.go:65] Loading cluster: old-k8s-version-658000
	I0307 10:24:14.766692    5811 config.go:182] Loaded profile config "old-k8s-version-658000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0307 10:24:14.768490    5811 out.go:177] * The control-plane node old-k8s-version-658000 host is not running: state=Stopped
	I0307 10:24:14.776039    5811 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-658000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-658000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-658000 -n old-k8s-version-658000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-658000 -n old-k8s-version-658000: exit status 7 (33.550291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-658000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-658000 -n old-k8s-version-658000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-658000 -n old-k8s-version-658000: exit status 7 (30.405416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-658000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)
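
Unlike the start failures (exit status 80), pause fails fast with exit status 83: mustload sees the profile's host in state Stopped and prints advice instead of attempting to pause a dead VM. The guard amounts to the following (a schematic sketch with a hypothetical isRunning helper, not minikube's mustload code):

    // Refuse to operate on a stopped profile and return a distinct code.
    package main

    import (
        "fmt"
        "os"
    )

    func isRunning(profile string) bool {
        return false // in this run the host is always Stopped
    }

    func main() {
        const profile = "old-k8s-version-658000"
        if !isRunning(profile) {
            fmt.Printf("* The control-plane node %s host is not running: state=Stopped\n", profile)
            fmt.Printf("  To start a cluster, run: %q\n", "minikube start -p "+profile)
            os.Exit(83) // the exit status seen above
        }
    }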

TestStartStop/group/embed-certs/serial/FirstStart (10.14s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-138000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-138000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (10.063023834s)

-- stdout --
	* [embed-certs-138000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-138000" primary control-plane node in "embed-certs-138000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-138000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 10:24:15.238736    5842 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:24:15.238898    5842 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:24:15.238901    5842 out.go:304] Setting ErrFile to fd 2...
	I0307 10:24:15.238903    5842 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:24:15.239036    5842 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:24:15.240061    5842 out.go:298] Setting JSON to false
	I0307 10:24:15.256085    5842 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5027,"bootTime":1709830828,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 10:24:15.256152    5842 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:24:15.261144    5842 out.go:177] * [embed-certs-138000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 10:24:15.267996    5842 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 10:24:15.268053    5842 notify.go:220] Checking for updates...
	I0307 10:24:15.275102    5842 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:24:15.276484    5842 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 10:24:15.279120    5842 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:24:15.282156    5842 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 10:24:15.285135    5842 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:24:15.288483    5842 config.go:182] Loaded profile config "multinode-606000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:24:15.288555    5842 config.go:182] Loaded profile config "no-preload-594000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0307 10:24:15.288596    5842 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:24:15.293162    5842 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 10:24:15.300054    5842 start.go:297] selected driver: qemu2
	I0307 10:24:15.300060    5842 start.go:901] validating driver "qemu2" against <nil>
	I0307 10:24:15.300065    5842 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:24:15.302354    5842 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 10:24:15.305095    5842 out.go:177] * Automatically selected the socket_vmnet network
	I0307 10:24:15.308189    5842 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 10:24:15.308226    5842 cni.go:84] Creating CNI manager for ""
	I0307 10:24:15.308233    5842 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 10:24:15.308243    5842 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 10:24:15.308266    5842 start.go:340] cluster config:
	{Name:embed-certs-138000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:24:15.312716    5842 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:24:15.320131    5842 out.go:177] * Starting "embed-certs-138000" primary control-plane node in "embed-certs-138000" cluster
	I0307 10:24:15.323067    5842 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 10:24:15.323087    5842 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 10:24:15.323098    5842 cache.go:56] Caching tarball of preloaded images
	I0307 10:24:15.323180    5842 preload.go:173] Found /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 10:24:15.323188    5842 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 10:24:15.323254    5842 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/embed-certs-138000/config.json ...
	I0307 10:24:15.323265    5842 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/embed-certs-138000/config.json: {Name:mk17fa658aa464233c73b613e680b6f0e241493f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:24:15.323495    5842 start.go:360] acquireMachinesLock for embed-certs-138000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:24:15.323528    5842 start.go:364] duration metric: took 27.25µs to acquireMachinesLock for "embed-certs-138000"
	I0307 10:24:15.323540    5842 start.go:93] Provisioning new machine with config: &{Name:embed-certs-138000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:24:15.323573    5842 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:24:15.327192    5842 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 10:24:15.344284    5842 start.go:159] libmachine.API.Create for "embed-certs-138000" (driver="qemu2")
	I0307 10:24:15.344312    5842 client.go:168] LocalClient.Create starting
	I0307 10:24:15.344376    5842 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:24:15.344407    5842 main.go:141] libmachine: Decoding PEM data...
	I0307 10:24:15.344416    5842 main.go:141] libmachine: Parsing certificate...
	I0307 10:24:15.344459    5842 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:24:15.344480    5842 main.go:141] libmachine: Decoding PEM data...
	I0307 10:24:15.344489    5842 main.go:141] libmachine: Parsing certificate...
	I0307 10:24:15.344890    5842 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:24:15.481709    5842 main.go:141] libmachine: Creating SSH key...
	I0307 10:24:15.722627    5842 main.go:141] libmachine: Creating Disk image...
	I0307 10:24:15.722640    5842 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:24:15.722838    5842 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/embed-certs-138000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/embed-certs-138000/disk.qcow2
	I0307 10:24:15.736097    5842 main.go:141] libmachine: STDOUT: 
	I0307 10:24:15.736116    5842 main.go:141] libmachine: STDERR: 
	I0307 10:24:15.736169    5842 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/embed-certs-138000/disk.qcow2 +20000M
	I0307 10:24:15.747067    5842 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:24:15.747093    5842 main.go:141] libmachine: STDERR: 
	I0307 10:24:15.747106    5842 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/embed-certs-138000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/embed-certs-138000/disk.qcow2
	I0307 10:24:15.747111    5842 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:24:15.747142    5842 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/embed-certs-138000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/embed-certs-138000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/embed-certs-138000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:6a:83:cd:d4:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/embed-certs-138000/disk.qcow2
	I0307 10:24:15.748957    5842 main.go:141] libmachine: STDOUT: 
	I0307 10:24:15.748971    5842 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:24:15.748993    5842 client.go:171] duration metric: took 404.688792ms to LocalClient.Create
	I0307 10:24:17.749195    5842 start.go:128] duration metric: took 2.42567075s to createHost
	I0307 10:24:17.749291    5842 start.go:83] releasing machines lock for "embed-certs-138000", held for 2.42583275s
	W0307 10:24:17.749354    5842 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:24:17.756545    5842 out.go:177] * Deleting "embed-certs-138000" in qemu2 ...
	W0307 10:24:17.781838    5842 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:24:17.781879    5842 start.go:728] Will try again in 5 seconds ...
	I0307 10:24:22.783881    5842 start.go:360] acquireMachinesLock for embed-certs-138000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:24:22.784409    5842 start.go:364] duration metric: took 386.208µs to acquireMachinesLock for "embed-certs-138000"
	I0307 10:24:22.784559    5842 start.go:93] Provisioning new machine with config: &{Name:embed-certs-138000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:24:22.785037    5842 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:24:22.794623    5842 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 10:24:22.846261    5842 start.go:159] libmachine.API.Create for "embed-certs-138000" (driver="qemu2")
	I0307 10:24:22.846310    5842 client.go:168] LocalClient.Create starting
	I0307 10:24:22.846414    5842 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:24:22.846469    5842 main.go:141] libmachine: Decoding PEM data...
	I0307 10:24:22.846487    5842 main.go:141] libmachine: Parsing certificate...
	I0307 10:24:22.846553    5842 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:24:22.846601    5842 main.go:141] libmachine: Decoding PEM data...
	I0307 10:24:22.846615    5842 main.go:141] libmachine: Parsing certificate...
	I0307 10:24:22.847123    5842 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:24:22.995004    5842 main.go:141] libmachine: Creating SSH key...
	I0307 10:24:23.183547    5842 main.go:141] libmachine: Creating Disk image...
	I0307 10:24:23.183553    5842 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:24:23.183737    5842 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/embed-certs-138000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/embed-certs-138000/disk.qcow2
	I0307 10:24:23.196498    5842 main.go:141] libmachine: STDOUT: 
	I0307 10:24:23.196522    5842 main.go:141] libmachine: STDERR: 
	I0307 10:24:23.196578    5842 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/embed-certs-138000/disk.qcow2 +20000M
	I0307 10:24:23.207316    5842 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:24:23.207333    5842 main.go:141] libmachine: STDERR: 
	I0307 10:24:23.207355    5842 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/embed-certs-138000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/embed-certs-138000/disk.qcow2
	I0307 10:24:23.207359    5842 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:24:23.207392    5842 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/embed-certs-138000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/embed-certs-138000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/embed-certs-138000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:f7:00:9b:80:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/embed-certs-138000/disk.qcow2
	I0307 10:24:23.209015    5842 main.go:141] libmachine: STDOUT: 
	I0307 10:24:23.209031    5842 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:24:23.209043    5842 client.go:171] duration metric: took 362.737042ms to LocalClient.Create
	I0307 10:24:25.211207    5842 start.go:128] duration metric: took 2.426208458s to createHost
	I0307 10:24:25.211286    5842 start.go:83] releasing machines lock for "embed-certs-138000", held for 2.426925583s
	W0307 10:24:25.211513    5842 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-138000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-138000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:24:25.220521    5842 out.go:177] 
	W0307 10:24:25.230563    5842 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:24:25.230598    5842 out.go:239] * 
	* 
	W0307 10:24:25.232769    5842 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:24:25.247499    5842 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-138000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-138000 -n embed-certs-138000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-138000 -n embed-certs-138000: exit status 7 (72.223667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-138000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.14s)
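Every failure in this group has the same root cause, visible in the stderr above: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and nothing is listening on /var/run/socket_vmnet ("Connection refused"). A quick way to confirm on the CI host is sketched below; the launchd label is the one installed by the upstream socket_vmnet project and is an assumption for this machine:

	# Is the daemon up and the socket present? (paths taken from the log above)
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# If not, kick the daemon; "io.github.lima-vm.socket_vmnet" is assumed here.
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet

If the daemon cannot be restarted, every subsequent qemu2 test in this run will fail the same way.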

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (6.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-594000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-594000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (6.796907542s)

                                                
                                                
-- stdout --
	* [no-preload-594000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-594000" primary control-plane node in "no-preload-594000" cluster
	* Restarting existing qemu2 VM for "no-preload-594000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-594000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 10:24:18.523705    5868 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:24:18.523848    5868 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:24:18.523851    5868 out.go:304] Setting ErrFile to fd 2...
	I0307 10:24:18.523854    5868 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:24:18.523992    5868 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:24:18.525019    5868 out.go:298] Setting JSON to false
	I0307 10:24:18.540994    5868 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5030,"bootTime":1709830828,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 10:24:18.541051    5868 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:24:18.546109    5868 out.go:177] * [no-preload-594000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 10:24:18.553162    5868 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 10:24:18.553225    5868 notify.go:220] Checking for updates...
	I0307 10:24:18.560062    5868 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:24:18.564071    5868 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 10:24:18.567025    5868 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:24:18.574096    5868 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 10:24:18.577076    5868 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:24:18.580329    5868 config.go:182] Loaded profile config "no-preload-594000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0307 10:24:18.580579    5868 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:24:18.585011    5868 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 10:24:18.592031    5868 start.go:297] selected driver: qemu2
	I0307 10:24:18.592037    5868 start.go:901] validating driver "qemu2" against &{Name:no-preload-594000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-594000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:24:18.592089    5868 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:24:18.594327    5868 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 10:24:18.594379    5868 cni.go:84] Creating CNI manager for ""
	I0307 10:24:18.594387    5868 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 10:24:18.594414    5868 start.go:340] cluster config:
	{Name:no-preload-594000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-594000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:24:18.598897    5868 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:24:18.607119    5868 out.go:177] * Starting "no-preload-594000" primary control-plane node in "no-preload-594000" cluster
	I0307 10:24:18.611061    5868 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0307 10:24:18.611147    5868 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/no-preload-594000/config.json ...
	I0307 10:24:18.611172    5868 cache.go:107] acquiring lock: {Name:mk55b0c5ddedbe4e05f714622b37932bb306454f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:24:18.611178    5868 cache.go:107] acquiring lock: {Name:mkaed600b7068dea88d7a0773cc4880ecec73127 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:24:18.611191    5868 cache.go:107] acquiring lock: {Name:mk5ed0d120c944b1444f48802164d0609b6c2750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:24:18.611236    5868 cache.go:115] /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0307 10:24:18.611242    5868 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 72.375µs
	I0307 10:24:18.611249    5868 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0307 10:24:18.611259    5868 cache.go:115] /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0307 10:24:18.611267    5868 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 102.5µs
	I0307 10:24:18.611265    5868 cache.go:107] acquiring lock: {Name:mkd6a5d4c95132c535988dca45b68c147ab46796 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:24:18.611271    5868 cache.go:115] /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0307 10:24:18.611272    5868 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0307 10:24:18.611278    5868 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 112.584µs
	I0307 10:24:18.611288    5868 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0307 10:24:18.611286    5868 cache.go:107] acquiring lock: {Name:mk975cc6bf1fc9bd8fb3b38cf722f30c8c4847ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:24:18.611305    5868 cache.go:107] acquiring lock: {Name:mkc791b1d5c98f4be0219cb4d9201ae786462f07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:24:18.611317    5868 cache.go:115] /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 exists
	I0307 10:24:18.611322    5868 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0" took 57.333µs
	I0307 10:24:18.611326    5868 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0307 10:24:18.611327    5868 cache.go:115] /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0307 10:24:18.611332    5868 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 47.292µs
	I0307 10:24:18.611339    5868 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0307 10:24:18.611348    5868 cache.go:115] /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0307 10:24:18.611353    5868 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 58.041µs
	I0307 10:24:18.611356    5868 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0307 10:24:18.611368    5868 cache.go:107] acquiring lock: {Name:mk6ab3dcf4bb3e285e73da762086b70625c4d20b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:24:18.611389    5868 cache.go:107] acquiring lock: {Name:mkd1268bdccb1b421c0a3616ff7e979d0471b45b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:24:18.611436    5868 cache.go:115] /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0307 10:24:18.611445    5868 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 92.125µs
	I0307 10:24:18.611449    5868 cache.go:115] /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0307 10:24:18.611451    5868 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0307 10:24:18.611454    5868 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 120.583µs
	I0307 10:24:18.611462    5868 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0307 10:24:18.611467    5868 cache.go:87] Successfully saved all images to host disk.
	I0307 10:24:18.611578    5868 start.go:360] acquireMachinesLock for no-preload-594000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:24:18.611616    5868 start.go:364] duration metric: took 31.084µs to acquireMachinesLock for "no-preload-594000"
	I0307 10:24:18.611625    5868 start.go:96] Skipping create...Using existing machine configuration
	I0307 10:24:18.611631    5868 fix.go:54] fixHost starting: 
	I0307 10:24:18.611756    5868 fix.go:112] recreateIfNeeded on no-preload-594000: state=Stopped err=<nil>
	W0307 10:24:18.611765    5868 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 10:24:18.620031    5868 out.go:177] * Restarting existing qemu2 VM for "no-preload-594000" ...
	I0307 10:24:18.623967    5868 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/no-preload-594000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/no-preload-594000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/no-preload-594000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:4f:8b:6c:28:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/no-preload-594000/disk.qcow2
	I0307 10:24:18.626112    5868 main.go:141] libmachine: STDOUT: 
	I0307 10:24:18.626133    5868 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:24:18.626163    5868 fix.go:56] duration metric: took 14.532083ms for fixHost
	I0307 10:24:18.626169    5868 start.go:83] releasing machines lock for "no-preload-594000", held for 14.5485ms
	W0307 10:24:18.626177    5868 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:24:18.626202    5868 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:24:18.626207    5868 start.go:728] Will try again in 5 seconds ...
	I0307 10:24:23.628301    5868 start.go:360] acquireMachinesLock for no-preload-594000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:24:25.211416    5868 start.go:364] duration metric: took 1.583071708s to acquireMachinesLock for "no-preload-594000"
	I0307 10:24:25.211570    5868 start.go:96] Skipping create...Using existing machine configuration
	I0307 10:24:25.211587    5868 fix.go:54] fixHost starting: 
	I0307 10:24:25.212204    5868 fix.go:112] recreateIfNeeded on no-preload-594000: state=Stopped err=<nil>
	W0307 10:24:25.212237    5868 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 10:24:25.226444    5868 out.go:177] * Restarting existing qemu2 VM for "no-preload-594000" ...
	I0307 10:24:25.234640    5868 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/no-preload-594000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/no-preload-594000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/no-preload-594000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:4f:8b:6c:28:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/no-preload-594000/disk.qcow2
	I0307 10:24:25.244152    5868 main.go:141] libmachine: STDOUT: 
	I0307 10:24:25.244683    5868 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:24:25.244784    5868 fix.go:56] duration metric: took 33.197334ms for fixHost
	I0307 10:24:25.244817    5868 start.go:83] releasing machines lock for "no-preload-594000", held for 33.366084ms
	W0307 10:24:25.245048    5868 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-594000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-594000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:24:25.257428    5868 out.go:177] 
	W0307 10:24:25.265518    5868 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:24:25.265556    5868 out.go:239] * 
	* 
	W0307 10:24:25.268065    5868 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:24:25.279634    5868 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-594000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-594000 -n no-preload-594000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-594000 -n no-preload-594000: exit status 7 (60.201833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-594000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (6.86s)
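The failure path here differs from FirstStart: the profile already exists, so minikube skips creation (start.go:96) and takes the fixHost/recreateIfNeeded route, yet the same socket_vmnet_client invocation is refused. To rule out QEMU itself, the identical VM can be booted with user-mode networking; the -netdev user flags below are a diagnostic substitution, not what the qemu2 driver actually runs:

	# Sketch: flags copied from the invocation above, minus the QMP/pidfile/daemonize
	# bits, with user-mode networking swapped in for socket_vmnet.
	qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 2200 -smp 2 \
	  -display none -boot d \
	  -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/no-preload-594000/boot2docker.iso \
	  -device virtio-net-pci,netdev=net0 -netdev user,id=net0 \
	  /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/no-preload-594000/disk.qcow2

If this boots, the problem is isolated to the socket_vmnet side rather than hvf or the disk image.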

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-138000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-138000 create -f testdata/busybox.yaml: exit status 1 (32.130917ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-138000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-138000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-138000 -n embed-certs-138000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-138000 -n embed-certs-138000: exit status 7 (32.184209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-138000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-138000 -n embed-certs-138000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-138000 -n embed-certs-138000: exit status 7 (34.544542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-138000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
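This failure is purely downstream of FirstStart: the cluster never came up, so no kubeconfig context named embed-certs-138000 was written, and every kubectl --context call exits 1 immediately. A minimal check, assuming the KUBECONFIG path shown in the start logs:

	# Contexts are only written once a cluster comes up; expect no match here.
	KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig \
	  kubectl config get-contexts -o name | grep embed-certs-138000

The same "context does not exist" pattern explains the remaining kubectl-based failures below.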

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-594000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-594000 -n no-preload-594000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-594000 -n no-preload-594000: exit status 7 (35.123667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-594000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-594000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-594000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-594000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.714916ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-594000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-594000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-594000 -n no-preload-594000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-594000 -n no-preload-594000: exit status 7 (33.779125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-594000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-138000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-138000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-138000 describe deploy/metrics-server -n kube-system: exit status 1 (28.980167ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-138000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-138000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-138000 -n embed-certs-138000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-138000 -n embed-certs-138000: exit status 7 (38.827042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-138000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.13s)
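For reference, the assertion at start_stop_delete_test.go:221 expects the addons enable overrides (--images / --registries) to compose into the image ref fake.domain/registry.k8s.io/echoserver:1.4 on the metrics-server deployment. Had the cluster existed, that could be checked directly; a sketch:

	# Expected per the assertion above: fake.domain/registry.k8s.io/echoserver:1.4
	kubectl --context embed-certs-138000 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'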

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-594000 image list --format=json
start_stop_delete_test.go:304: v1.29.0-rc.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.10-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-controller-manager:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-proxy:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-scheduler:v1.29.0-rc.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-594000 -n no-preload-594000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-594000 -n no-preload-594000: exit status 7 (33.024959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-594000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.09s)
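The empty "got" side of the diff follows from the same root cause: with no VM running, image list has no container runtime to query. The images themselves were cached successfully (see the cache.go lines in SecondStart above), which can be confirmed on disk:

	# Cache dir taken from the cache.go lines above; expect one tarball per image.
	ls /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/images/arm64/registry.k8s.io/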

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-594000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-594000 --alsologtostderr -v=1: exit status 83 (51.458375ms)

                                                
                                                
-- stdout --
	* The control-plane node no-preload-594000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-594000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 10:24:25.572022    5901 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:24:25.572180    5901 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:24:25.572187    5901 out.go:304] Setting ErrFile to fd 2...
	I0307 10:24:25.572190    5901 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:24:25.572319    5901 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:24:25.572586    5901 out.go:298] Setting JSON to false
	I0307 10:24:25.572593    5901 mustload.go:65] Loading cluster: no-preload-594000
	I0307 10:24:25.572801    5901 config.go:182] Loaded profile config "no-preload-594000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0307 10:24:25.577266    5901 out.go:177] * The control-plane node no-preload-594000 host is not running: state=Stopped
	I0307 10:24:25.584271    5901 out.go:177]   To start a cluster, run: "minikube start -p no-preload-594000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-594000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-594000 -n no-preload-594000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-594000 -n no-preload-594000: exit status 7 (33.847ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-594000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-594000 -n no-preload-594000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-594000 -n no-preload-594000: exit status 7 (29.557292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-594000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.12s)
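Note that pause exits with status 83 rather than the hard-error status 80 when the host is merely stopped, which lets a caller distinguish "not running" from "pause failed". A wrapper-script sketch of that distinction; treating 83 this way is inferred from the run above, not from documented exit codes:

	out/minikube-darwin-arm64 pause -p no-preload-594000; ec=$?
	case $ec in
	  0)  echo "paused" ;;
	  83) echo "host not running; start it first" ;;   # the status observed above
	  *)  echo "pause failed with status $ec" ;;
	esac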

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-056000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-056000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (9.976626s)

-- stdout --
	* [default-k8s-diff-port-056000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-056000" primary control-plane node in "default-k8s-diff-port-056000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-056000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 10:24:26.275093    5946 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:24:26.275209    5946 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:24:26.275214    5946 out.go:304] Setting ErrFile to fd 2...
	I0307 10:24:26.275216    5946 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:24:26.275333    5946 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:24:26.276409    5946 out.go:298] Setting JSON to false
	I0307 10:24:26.292456    5946 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5038,"bootTime":1709830828,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 10:24:26.292519    5946 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:24:26.297813    5946 out.go:177] * [default-k8s-diff-port-056000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 10:24:26.305853    5946 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 10:24:26.309711    5946 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:24:26.305898    5946 notify.go:220] Checking for updates...
	I0307 10:24:26.312781    5946 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 10:24:26.315688    5946 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:24:26.319778    5946 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 10:24:26.322821    5946 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:24:26.326083    5946 config.go:182] Loaded profile config "embed-certs-138000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:24:26.326141    5946 config.go:182] Loaded profile config "multinode-606000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:24:26.326185    5946 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:24:26.330793    5946 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 10:24:26.336674    5946 start.go:297] selected driver: qemu2
	I0307 10:24:26.336679    5946 start.go:901] validating driver "qemu2" against <nil>
	I0307 10:24:26.336684    5946 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:24:26.338898    5946 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 10:24:26.341794    5946 out.go:177] * Automatically selected the socket_vmnet network
	I0307 10:24:26.344911    5946 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 10:24:26.344963    5946 cni.go:84] Creating CNI manager for ""
	I0307 10:24:26.344970    5946 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 10:24:26.344975    5946 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 10:24:26.345006    5946 start.go:340] cluster config:
	{Name:default-k8s-diff-port-056000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-056000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:24:26.349395    5946 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:24:26.356803    5946 out.go:177] * Starting "default-k8s-diff-port-056000" primary control-plane node in "default-k8s-diff-port-056000" cluster
	I0307 10:24:26.360742    5946 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 10:24:26.360756    5946 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 10:24:26.360765    5946 cache.go:56] Caching tarball of preloaded images
	I0307 10:24:26.360817    5946 preload.go:173] Found /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 10:24:26.360823    5946 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 10:24:26.360894    5946 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/default-k8s-diff-port-056000/config.json ...
	I0307 10:24:26.360906    5946 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/default-k8s-diff-port-056000/config.json: {Name:mkdc991b92521c1dd525a752c8038fbae86020ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:24:26.361143    5946 start.go:360] acquireMachinesLock for default-k8s-diff-port-056000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:24:26.361177    5946 start.go:364] duration metric: took 26.792µs to acquireMachinesLock for "default-k8s-diff-port-056000"
	I0307 10:24:26.361191    5946 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-056000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-056000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:24:26.361229    5946 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:24:26.368776    5946 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 10:24:26.386667    5946 start.go:159] libmachine.API.Create for "default-k8s-diff-port-056000" (driver="qemu2")
	I0307 10:24:26.386702    5946 client.go:168] LocalClient.Create starting
	I0307 10:24:26.386763    5946 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:24:26.386793    5946 main.go:141] libmachine: Decoding PEM data...
	I0307 10:24:26.386803    5946 main.go:141] libmachine: Parsing certificate...
	I0307 10:24:26.386850    5946 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:24:26.386873    5946 main.go:141] libmachine: Decoding PEM data...
	I0307 10:24:26.386879    5946 main.go:141] libmachine: Parsing certificate...
	I0307 10:24:26.387258    5946 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:24:26.527316    5946 main.go:141] libmachine: Creating SSH key...
	I0307 10:24:26.566846    5946 main.go:141] libmachine: Creating Disk image...
	I0307 10:24:26.566851    5946 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:24:26.567004    5946 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/default-k8s-diff-port-056000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/default-k8s-diff-port-056000/disk.qcow2
	I0307 10:24:26.579090    5946 main.go:141] libmachine: STDOUT: 
	I0307 10:24:26.579107    5946 main.go:141] libmachine: STDERR: 
	I0307 10:24:26.579163    5946 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/default-k8s-diff-port-056000/disk.qcow2 +20000M
	I0307 10:24:26.589800    5946 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:24:26.589824    5946 main.go:141] libmachine: STDERR: 
	I0307 10:24:26.589842    5946 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/default-k8s-diff-port-056000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/default-k8s-diff-port-056000/disk.qcow2
	I0307 10:24:26.589851    5946 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:24:26.589877    5946 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/default-k8s-diff-port-056000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/default-k8s-diff-port-056000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/default-k8s-diff-port-056000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:35:2a:d2:a6:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/default-k8s-diff-port-056000/disk.qcow2
	I0307 10:24:26.591696    5946 main.go:141] libmachine: STDOUT: 
	I0307 10:24:26.591712    5946 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:24:26.591730    5946 client.go:171] duration metric: took 205.029458ms to LocalClient.Create
	I0307 10:24:28.593836    5946 start.go:128] duration metric: took 2.232658625s to createHost
	I0307 10:24:28.593900    5946 start.go:83] releasing machines lock for "default-k8s-diff-port-056000", held for 2.232785292s
	W0307 10:24:28.593942    5946 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:24:28.606812    5946 out.go:177] * Deleting "default-k8s-diff-port-056000" in qemu2 ...
	W0307 10:24:28.631453    5946 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:24:28.631532    5946 start.go:728] Will try again in 5 seconds ...
	I0307 10:24:33.633520    5946 start.go:360] acquireMachinesLock for default-k8s-diff-port-056000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:24:33.633876    5946 start.go:364] duration metric: took 275.875µs to acquireMachinesLock for "default-k8s-diff-port-056000"
	I0307 10:24:33.634007    5946 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-056000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-056000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:24:33.634284    5946 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:24:33.642892    5946 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 10:24:33.692710    5946 start.go:159] libmachine.API.Create for "default-k8s-diff-port-056000" (driver="qemu2")
	I0307 10:24:33.692761    5946 client.go:168] LocalClient.Create starting
	I0307 10:24:33.692873    5946 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:24:33.692931    5946 main.go:141] libmachine: Decoding PEM data...
	I0307 10:24:33.692949    5946 main.go:141] libmachine: Parsing certificate...
	I0307 10:24:33.693006    5946 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:24:33.693047    5946 main.go:141] libmachine: Decoding PEM data...
	I0307 10:24:33.693057    5946 main.go:141] libmachine: Parsing certificate...
	I0307 10:24:33.693560    5946 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:24:33.843247    5946 main.go:141] libmachine: Creating SSH key...
	I0307 10:24:34.130354    5946 main.go:141] libmachine: Creating Disk image...
	I0307 10:24:34.130366    5946 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:24:34.130575    5946 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/default-k8s-diff-port-056000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/default-k8s-diff-port-056000/disk.qcow2
	I0307 10:24:34.143364    5946 main.go:141] libmachine: STDOUT: 
	I0307 10:24:34.143384    5946 main.go:141] libmachine: STDERR: 
	I0307 10:24:34.143434    5946 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/default-k8s-diff-port-056000/disk.qcow2 +20000M
	I0307 10:24:34.153878    5946 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:24:34.153893    5946 main.go:141] libmachine: STDERR: 
	I0307 10:24:34.153905    5946 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/default-k8s-diff-port-056000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/default-k8s-diff-port-056000/disk.qcow2
	I0307 10:24:34.153912    5946 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:24:34.153950    5946 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/default-k8s-diff-port-056000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/default-k8s-diff-port-056000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/default-k8s-diff-port-056000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:ac:85:31:af:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/default-k8s-diff-port-056000/disk.qcow2
	I0307 10:24:34.155545    5946 main.go:141] libmachine: STDOUT: 
	I0307 10:24:34.155559    5946 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:24:34.155573    5946 client.go:171] duration metric: took 462.820708ms to LocalClient.Create
	I0307 10:24:36.157664    5946 start.go:128] duration metric: took 2.523405083s to createHost
	I0307 10:24:36.157748    5946 start.go:83] releasing machines lock for "default-k8s-diff-port-056000", held for 2.523910708s
	W0307 10:24:36.158000    5946 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-056000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-056000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:24:36.171350    5946 out.go:177] 
	W0307 10:24:36.178586    5946 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:24:36.178612    5946 out.go:239] * 
	* 
	W0307 10:24:36.181207    5946 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:24:36.194490    5946 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-056000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-056000 -n default-k8s-diff-port-056000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-056000 -n default-k8s-diff-port-056000: exit status 7 (70.047666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-056000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.05s)
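Note: this start failure (and the other qemu2 start failures in this run) reduces to the same root cause: the socket_vmnet client cannot reach its daemon at /var/run/socket_vmnet ("Connection refused"), so the qemu-system-aarch64 invocation never gets a network file descriptor. A hedged triage sketch for the CI host follows; the paths come from the log above, while the launchd-managed install is an assumption:

	# does the unix socket exist, and is a daemon serving it?
	ls -l /var/run/socket_vmnet
	# if socket_vmnet runs under launchd (assumption), check the job is loaded:
	sudo launchctl list | grep -i socket_vmnet
	# replay the same client handshake minikube performs, with a no-op in place of qemu:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true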

TestStartStop/group/embed-certs/serial/SecondStart (7.09s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-138000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-138000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (7.025521042s)

-- stdout --
	* [embed-certs-138000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-138000" primary control-plane node in "embed-certs-138000" cluster
	* Restarting existing qemu2 VM for "embed-certs-138000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-138000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 10:24:29.240389    5972 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:24:29.240518    5972 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:24:29.240521    5972 out.go:304] Setting ErrFile to fd 2...
	I0307 10:24:29.240523    5972 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:24:29.240643    5972 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:24:29.241612    5972 out.go:298] Setting JSON to false
	I0307 10:24:29.257510    5972 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5041,"bootTime":1709830828,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 10:24:29.257611    5972 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:24:29.261828    5972 out.go:177] * [embed-certs-138000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 10:24:29.268938    5972 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 10:24:29.272840    5972 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:24:29.268963    5972 notify.go:220] Checking for updates...
	I0307 10:24:29.278845    5972 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 10:24:29.281827    5972 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:24:29.284865    5972 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 10:24:29.287855    5972 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:24:29.291113    5972 config.go:182] Loaded profile config "embed-certs-138000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:24:29.291365    5972 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:24:29.295826    5972 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 10:24:29.302761    5972 start.go:297] selected driver: qemu2
	I0307 10:24:29.302767    5972 start.go:901] validating driver "qemu2" against &{Name:embed-certs-138000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:24:29.302842    5972 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:24:29.305111    5972 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 10:24:29.305160    5972 cni.go:84] Creating CNI manager for ""
	I0307 10:24:29.305168    5972 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 10:24:29.305212    5972 start.go:340] cluster config:
	{Name:embed-certs-138000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:24:29.309549    5972 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:24:29.316692    5972 out.go:177] * Starting "embed-certs-138000" primary control-plane node in "embed-certs-138000" cluster
	I0307 10:24:29.320805    5972 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 10:24:29.320819    5972 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 10:24:29.320829    5972 cache.go:56] Caching tarball of preloaded images
	I0307 10:24:29.320884    5972 preload.go:173] Found /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 10:24:29.320889    5972 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 10:24:29.320951    5972 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/embed-certs-138000/config.json ...
	I0307 10:24:29.321390    5972 start.go:360] acquireMachinesLock for embed-certs-138000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:24:29.321427    5972 start.go:364] duration metric: took 30.625µs to acquireMachinesLock for "embed-certs-138000"
	I0307 10:24:29.321436    5972 start.go:96] Skipping create...Using existing machine configuration
	I0307 10:24:29.321442    5972 fix.go:54] fixHost starting: 
	I0307 10:24:29.321560    5972 fix.go:112] recreateIfNeeded on embed-certs-138000: state=Stopped err=<nil>
	W0307 10:24:29.321572    5972 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 10:24:29.325696    5972 out.go:177] * Restarting existing qemu2 VM for "embed-certs-138000" ...
	I0307 10:24:29.333822    5972 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/embed-certs-138000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/embed-certs-138000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/embed-certs-138000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:f7:00:9b:80:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/embed-certs-138000/disk.qcow2
	I0307 10:24:29.335900    5972 main.go:141] libmachine: STDOUT: 
	I0307 10:24:29.335923    5972 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:24:29.335952    5972 fix.go:56] duration metric: took 14.509167ms for fixHost
	I0307 10:24:29.335958    5972 start.go:83] releasing machines lock for "embed-certs-138000", held for 14.52675ms
	W0307 10:24:29.335964    5972 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:24:29.335994    5972 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:24:29.336000    5972 start.go:728] Will try again in 5 seconds ...
	I0307 10:24:34.338007    5972 start.go:360] acquireMachinesLock for embed-certs-138000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:24:36.157878    5972 start.go:364] duration metric: took 1.819857209s to acquireMachinesLock for "embed-certs-138000"
	I0307 10:24:36.158016    5972 start.go:96] Skipping create...Using existing machine configuration
	I0307 10:24:36.158031    5972 fix.go:54] fixHost starting: 
	I0307 10:24:36.158530    5972 fix.go:112] recreateIfNeeded on embed-certs-138000: state=Stopped err=<nil>
	W0307 10:24:36.158555    5972 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 10:24:36.175489    5972 out.go:177] * Restarting existing qemu2 VM for "embed-certs-138000" ...
	I0307 10:24:36.181634    5972 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/embed-certs-138000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/embed-certs-138000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/embed-certs-138000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:f7:00:9b:80:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/embed-certs-138000/disk.qcow2
	I0307 10:24:36.190450    5972 main.go:141] libmachine: STDOUT: 
	I0307 10:24:36.190522    5972 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:24:36.190609    5972 fix.go:56] duration metric: took 32.577792ms for fixHost
	I0307 10:24:36.190630    5972 start.go:83] releasing machines lock for "embed-certs-138000", held for 32.719167ms
	W0307 10:24:36.190832    5972 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-138000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-138000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:24:36.202558    5972 out.go:177] 
	W0307 10:24:36.210609    5972 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:24:36.210714    5972 out.go:239] * 
	* 
	W0307 10:24:36.213400    5972 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:24:36.225530    5972 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-138000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-138000 -n embed-certs-138000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-138000 -n embed-certs-138000: exit status 7 (61.270583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-138000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (7.09s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-056000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-056000 create -f testdata/busybox.yaml: exit status 1 (33.102125ms)

** stderr ** 
	error: context "default-k8s-diff-port-056000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-056000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-056000 -n default-k8s-diff-port-056000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-056000 -n default-k8s-diff-port-056000: exit status 7 (32.298792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-056000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-056000 -n default-k8s-diff-port-056000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-056000 -n default-k8s-diff-port-056000: exit status 7 (35.26575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-056000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
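Note: this is a secondary failure: FirstStart above never created the cluster, so no kubeconfig context named default-k8s-diff-port-056000 exists, and every kubectl-based step in this serial group fails immediately with the same error. The state can be confirmed with plain kubectl:

	# list known contexts; default-k8s-diff-port-056000 is absent in this run
	kubectl config get-contexts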

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-138000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-138000 -n embed-certs-138000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-138000 -n embed-certs-138000: exit status 7 (36.321541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-138000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-138000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-138000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-138000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.247542ms)

** stderr ** 
	error: context "embed-certs-138000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-138000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-138000 -n embed-certs-138000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-138000 -n embed-certs-138000: exit status 7 (33.443916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-138000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-056000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-056000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-056000 describe deploy/metrics-server -n kube-system: exit status 1 (28.839666ms)

** stderr ** 
	error: context "default-k8s-diff-port-056000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-056000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-056000 -n default-k8s-diff-port-056000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-056000 -n default-k8s-diff-port-056000: exit status 7 (42.176041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-056000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.13s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-138000 image list --format=json
start_stop_delete_test.go:304: v1.28.4 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.4",
- 	"registry.k8s.io/kube-controller-manager:v1.28.4",
- 	"registry.k8s.io/kube-proxy:v1.28.4",
- 	"registry.k8s.io/kube-scheduler:v1.28.4",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-138000 -n embed-certs-138000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-138000 -n embed-certs-138000: exit status 7 (33.156209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-138000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.09s)

TestStartStop/group/embed-certs/serial/Pause (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-138000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-138000 --alsologtostderr -v=1: exit status 83 (52.359291ms)

-- stdout --
	* The control-plane node embed-certs-138000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-138000"

                                                
** stderr ** 
	I0307 10:24:36.517176    6006 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:24:36.517375    6006 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:24:36.517378    6006 out.go:304] Setting ErrFile to fd 2...
	I0307 10:24:36.517380    6006 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:24:36.517516    6006 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:24:36.517762    6006 out.go:298] Setting JSON to false
	I0307 10:24:36.517770    6006 mustload.go:65] Loading cluster: embed-certs-138000
	I0307 10:24:36.517972    6006 config.go:182] Loaded profile config "embed-certs-138000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:24:36.524683    6006 out.go:177] * The control-plane node embed-certs-138000 host is not running: state=Stopped
	I0307 10:24:36.531577    6006 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-138000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-138000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-138000 -n embed-certs-138000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-138000 -n embed-certs-138000: exit status 7 (34.658458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-138000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-138000 -n embed-certs-138000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-138000 -n embed-certs-138000: exit status 7 (30.491625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-138000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.12s)

TestStartStop/group/newest-cni/serial/FirstStart (9.95s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-706000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-706000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (9.872666167s)

-- stdout --
	* [newest-cni-706000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-706000" primary control-plane node in "newest-cni-706000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-706000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 10:24:36.990271    6036 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:24:36.990390    6036 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:24:36.990393    6036 out.go:304] Setting ErrFile to fd 2...
	I0307 10:24:36.990395    6036 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:24:36.990527    6036 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:24:36.991638    6036 out.go:298] Setting JSON to false
	I0307 10:24:37.007732    6036 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5048,"bootTime":1709830828,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 10:24:37.007798    6036 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:24:37.012522    6036 out.go:177] * [newest-cni-706000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 10:24:37.019526    6036 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 10:24:37.019561    6036 notify.go:220] Checking for updates...
	I0307 10:24:37.023476    6036 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:24:37.027553    6036 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 10:24:37.029026    6036 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:24:37.032581    6036 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 10:24:37.035552    6036 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:24:37.038929    6036 config.go:182] Loaded profile config "default-k8s-diff-port-056000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:24:37.038996    6036 config.go:182] Loaded profile config "multinode-606000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:24:37.039047    6036 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:24:37.043457    6036 out.go:177] * Using the qemu2 driver based on user configuration
	I0307 10:24:37.050573    6036 start.go:297] selected driver: qemu2
	I0307 10:24:37.050580    6036 start.go:901] validating driver "qemu2" against <nil>
	I0307 10:24:37.050586    6036 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:24:37.052900    6036 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0307 10:24:37.052924    6036 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0307 10:24:37.061556    6036 out.go:177] * Automatically selected the socket_vmnet network
	I0307 10:24:37.064654    6036 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0307 10:24:37.064703    6036 cni.go:84] Creating CNI manager for ""
	I0307 10:24:37.064711    6036 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 10:24:37.064722    6036 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 10:24:37.064746    6036 start.go:340] cluster config:
	{Name:newest-cni-706000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-706000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:24:37.069526    6036 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:24:37.076568    6036 out.go:177] * Starting "newest-cni-706000" primary control-plane node in "newest-cni-706000" cluster
	I0307 10:24:37.080481    6036 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0307 10:24:37.080495    6036 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0307 10:24:37.080503    6036 cache.go:56] Caching tarball of preloaded images
	I0307 10:24:37.080553    6036 preload.go:173] Found /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 10:24:37.080558    6036 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0307 10:24:37.080621    6036 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/newest-cni-706000/config.json ...
	I0307 10:24:37.080633    6036 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/newest-cni-706000/config.json: {Name:mk5519307b9d1720eae1a7bf63a95a2c341dd0ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:24:37.080861    6036 start.go:360] acquireMachinesLock for newest-cni-706000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:24:37.080898    6036 start.go:364] duration metric: took 31.167µs to acquireMachinesLock for "newest-cni-706000"
	I0307 10:24:37.080910    6036 start.go:93] Provisioning new machine with config: &{Name:newest-cni-706000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-706000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:24:37.080958    6036 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:24:37.089536    6036 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 10:24:37.108123    6036 start.go:159] libmachine.API.Create for "newest-cni-706000" (driver="qemu2")
	I0307 10:24:37.108151    6036 client.go:168] LocalClient.Create starting
	I0307 10:24:37.108209    6036 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:24:37.108238    6036 main.go:141] libmachine: Decoding PEM data...
	I0307 10:24:37.108247    6036 main.go:141] libmachine: Parsing certificate...
	I0307 10:24:37.108299    6036 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:24:37.108321    6036 main.go:141] libmachine: Decoding PEM data...
	I0307 10:24:37.108329    6036 main.go:141] libmachine: Parsing certificate...
	I0307 10:24:37.108727    6036 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:24:37.244855    6036 main.go:141] libmachine: Creating SSH key...
	I0307 10:24:37.422442    6036 main.go:141] libmachine: Creating Disk image...
	I0307 10:24:37.422450    6036 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:24:37.422622    6036 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/newest-cni-706000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/newest-cni-706000/disk.qcow2
	I0307 10:24:37.435764    6036 main.go:141] libmachine: STDOUT: 
	I0307 10:24:37.435788    6036 main.go:141] libmachine: STDERR: 
	I0307 10:24:37.435837    6036 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/newest-cni-706000/disk.qcow2 +20000M
	I0307 10:24:37.446935    6036 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:24:37.446953    6036 main.go:141] libmachine: STDERR: 
	I0307 10:24:37.446967    6036 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/newest-cni-706000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/newest-cni-706000/disk.qcow2
	I0307 10:24:37.446976    6036 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:24:37.447009    6036 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/newest-cni-706000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/newest-cni-706000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/newest-cni-706000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:47:cf:2f:0c:18 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/newest-cni-706000/disk.qcow2
	I0307 10:24:37.448803    6036 main.go:141] libmachine: STDOUT: 
	I0307 10:24:37.448817    6036 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:24:37.448837    6036 client.go:171] duration metric: took 340.691167ms to LocalClient.Create
	I0307 10:24:39.451023    6036 start.go:128] duration metric: took 2.370120875s to createHost
	I0307 10:24:39.451094    6036 start.go:83] releasing machines lock for "newest-cni-706000", held for 2.370263333s
	W0307 10:24:39.451195    6036 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:24:39.467313    6036 out.go:177] * Deleting "newest-cni-706000" in qemu2 ...
	W0307 10:24:39.492385    6036 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:24:39.492416    6036 start.go:728] Will try again in 5 seconds ...
	I0307 10:24:44.494508    6036 start.go:360] acquireMachinesLock for newest-cni-706000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:24:44.494906    6036 start.go:364] duration metric: took 296.084µs to acquireMachinesLock for "newest-cni-706000"
	I0307 10:24:44.495035    6036 start.go:93] Provisioning new machine with config: &{Name:newest-cni-706000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-706000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:24:44.495311    6036 start.go:125] createHost starting for "" (driver="qemu2")
	I0307 10:24:44.501019    6036 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 10:24:44.552075    6036 start.go:159] libmachine.API.Create for "newest-cni-706000" (driver="qemu2")
	I0307 10:24:44.552117    6036 client.go:168] LocalClient.Create starting
	I0307 10:24:44.552210    6036 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/ca.pem
	I0307 10:24:44.552264    6036 main.go:141] libmachine: Decoding PEM data...
	I0307 10:24:44.552320    6036 main.go:141] libmachine: Parsing certificate...
	I0307 10:24:44.552378    6036 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18241-1349/.minikube/certs/cert.pem
	I0307 10:24:44.552419    6036 main.go:141] libmachine: Decoding PEM data...
	I0307 10:24:44.552429    6036 main.go:141] libmachine: Parsing certificate...
	I0307 10:24:44.552920    6036 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0307 10:24:44.701080    6036 main.go:141] libmachine: Creating SSH key...
	I0307 10:24:44.741770    6036 main.go:141] libmachine: Creating Disk image...
	I0307 10:24:44.741775    6036 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0307 10:24:44.741957    6036 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/newest-cni-706000/disk.qcow2.raw /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/newest-cni-706000/disk.qcow2
	I0307 10:24:44.754617    6036 main.go:141] libmachine: STDOUT: 
	I0307 10:24:44.754639    6036 main.go:141] libmachine: STDERR: 
	I0307 10:24:44.754701    6036 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/newest-cni-706000/disk.qcow2 +20000M
	I0307 10:24:44.765380    6036 main.go:141] libmachine: STDOUT: Image resized.
	
	I0307 10:24:44.765396    6036 main.go:141] libmachine: STDERR: 
	I0307 10:24:44.765414    6036 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/newest-cni-706000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/newest-cni-706000/disk.qcow2
	I0307 10:24:44.765418    6036 main.go:141] libmachine: Starting QEMU VM...
	I0307 10:24:44.765449    6036 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/newest-cni-706000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/newest-cni-706000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/newest-cni-706000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:73:f9:5a:73:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/newest-cni-706000/disk.qcow2
	I0307 10:24:44.767131    6036 main.go:141] libmachine: STDOUT: 
	I0307 10:24:44.767146    6036 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:24:44.767163    6036 client.go:171] duration metric: took 215.048834ms to LocalClient.Create
	I0307 10:24:46.769250    6036 start.go:128] duration metric: took 2.273968125s to createHost
	I0307 10:24:46.769301    6036 start.go:83] releasing machines lock for "newest-cni-706000", held for 2.274447041s
	W0307 10:24:46.769624    6036 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-706000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-706000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:24:46.784242    6036 out.go:177] 
	W0307 10:24:46.788180    6036 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:24:46.788211    6036 out.go:239] * 
	* 
	W0307 10:24:46.790741    6036 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:24:46.808234    6036 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-706000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-706000 -n newest-cni-706000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-706000 -n newest-cni-706000: exit status 7 (71.983167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-706000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.95s)
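
Note: every start failure in this report, including the run above, bottoms out in the same error: Failed to connect to "/var/run/socket_vmnet": Connection refused. The cluster config logged above names the paths involved (SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client, SocketVMnetPath:/var/run/socket_vmnet), which points at the socket_vmnet daemon not running on the build agent rather than at the tests themselves. A minimal pre-flight sketch, assuming the standard /opt/socket_vmnet layout those paths imply (the gateway address below is illustrative, not taken from this log):

	# Confirm a daemon holds the socket; "Connection refused" means the socket
	# file exists (or is stale) but nothing is listening on it.
	ls -l /var/run/socket_vmnet
	# Foreground invocation per the socket_vmnet README, to restore the listener:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet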

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.56s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-056000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-056000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (6.4922405s)

-- stdout --
	* [default-k8s-diff-port-056000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-056000" primary control-plane node in "default-k8s-diff-port-056000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-056000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-056000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 10:24:40.387060    6065 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:24:40.387176    6065 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:24:40.387180    6065 out.go:304] Setting ErrFile to fd 2...
	I0307 10:24:40.387182    6065 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:24:40.387298    6065 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:24:40.388336    6065 out.go:298] Setting JSON to false
	I0307 10:24:40.404200    6065 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5052,"bootTime":1709830828,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 10:24:40.404295    6065 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:24:40.409376    6065 out.go:177] * [default-k8s-diff-port-056000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 10:24:40.416404    6065 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 10:24:40.416467    6065 notify.go:220] Checking for updates...
	I0307 10:24:40.420418    6065 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:24:40.423437    6065 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 10:24:40.430329    6065 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:24:40.433401    6065 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 10:24:40.436358    6065 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:24:40.439620    6065 config.go:182] Loaded profile config "default-k8s-diff-port-056000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:24:40.439887    6065 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:24:40.444367    6065 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 10:24:40.451352    6065 start.go:297] selected driver: qemu2
	I0307 10:24:40.451359    6065 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-056000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-056000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:24:40.451438    6065 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:24:40.453729    6065 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 10:24:40.453779    6065 cni.go:84] Creating CNI manager for ""
	I0307 10:24:40.453786    6065 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 10:24:40.453814    6065 start.go:340] cluster config:
	{Name:default-k8s-diff-port-056000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-056000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:24:40.458059    6065 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:24:40.466429    6065 out.go:177] * Starting "default-k8s-diff-port-056000" primary control-plane node in "default-k8s-diff-port-056000" cluster
	I0307 10:24:40.470421    6065 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 10:24:40.470433    6065 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 10:24:40.470457    6065 cache.go:56] Caching tarball of preloaded images
	I0307 10:24:40.470501    6065 preload.go:173] Found /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 10:24:40.470506    6065 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 10:24:40.470560    6065 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/default-k8s-diff-port-056000/config.json ...
	I0307 10:24:40.471051    6065 start.go:360] acquireMachinesLock for default-k8s-diff-port-056000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:24:40.471083    6065 start.go:364] duration metric: took 26µs to acquireMachinesLock for "default-k8s-diff-port-056000"
	I0307 10:24:40.471091    6065 start.go:96] Skipping create...Using existing machine configuration
	I0307 10:24:40.471097    6065 fix.go:54] fixHost starting: 
	I0307 10:24:40.471217    6065 fix.go:112] recreateIfNeeded on default-k8s-diff-port-056000: state=Stopped err=<nil>
	W0307 10:24:40.471227    6065 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 10:24:40.475306    6065 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-056000" ...
	I0307 10:24:40.482421    6065 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/default-k8s-diff-port-056000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/default-k8s-diff-port-056000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/default-k8s-diff-port-056000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:ac:85:31:af:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/default-k8s-diff-port-056000/disk.qcow2
	I0307 10:24:40.484508    6065 main.go:141] libmachine: STDOUT: 
	I0307 10:24:40.484527    6065 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:24:40.484555    6065 fix.go:56] duration metric: took 13.459542ms for fixHost
	I0307 10:24:40.484559    6065 start.go:83] releasing machines lock for "default-k8s-diff-port-056000", held for 13.472042ms
	W0307 10:24:40.484565    6065 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:24:40.484596    6065 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:24:40.484602    6065 start.go:728] Will try again in 5 seconds ...
	I0307 10:24:45.486587    6065 start.go:360] acquireMachinesLock for default-k8s-diff-port-056000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:24:46.769465    6065 start.go:364] duration metric: took 1.282793792s to acquireMachinesLock for "default-k8s-diff-port-056000"
	I0307 10:24:46.769637    6065 start.go:96] Skipping create...Using existing machine configuration
	I0307 10:24:46.769652    6065 fix.go:54] fixHost starting: 
	I0307 10:24:46.770314    6065 fix.go:112] recreateIfNeeded on default-k8s-diff-port-056000: state=Stopped err=<nil>
	W0307 10:24:46.770344    6065 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 10:24:46.784242    6065 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-056000" ...
	I0307 10:24:46.796465    6065 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/default-k8s-diff-port-056000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/default-k8s-diff-port-056000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/default-k8s-diff-port-056000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:ac:85:31:af:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/default-k8s-diff-port-056000/disk.qcow2
	I0307 10:24:46.806630    6065 main.go:141] libmachine: STDOUT: 
	I0307 10:24:46.806696    6065 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:24:46.806787    6065 fix.go:56] duration metric: took 37.136917ms for fixHost
	I0307 10:24:46.806803    6065 start.go:83] releasing machines lock for "default-k8s-diff-port-056000", held for 37.292167ms
	W0307 10:24:46.806995    6065 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-056000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-056000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:24:46.819095    6065 out.go:177] 
	W0307 10:24:46.823364    6065 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:24:46.823427    6065 out.go:239] * 
	* 
	W0307 10:24:46.826133    6065 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:24:46.836254    6065 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-056000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-056000 -n default-k8s-diff-port-056000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-056000 -n default-k8s-diff-port-056000: exit status 7 (62.601625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-056000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.56s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-056000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-056000 -n default-k8s-diff-port-056000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-056000 -n default-k8s-diff-port-056000: exit status 7 (41.515791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-056000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-056000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-056000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-056000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.819708ms)

** stderr ** 
	error: context "default-k8s-diff-port-056000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-056000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-056000 -n default-k8s-diff-port-056000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-056000 -n default-k8s-diff-port-056000: exit status 7 (38.776125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-056000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-056000 image list --format=json
start_stop_delete_test.go:304: v1.28.4 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.4",
- 	"registry.k8s.io/kube-controller-manager:v1.28.4",
- 	"registry.k8s.io/kube-proxy:v1.28.4",
- 	"registry.k8s.io/kube-scheduler:v1.28.4",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-056000 -n default-k8s-diff-port-056000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-056000 -n default-k8s-diff-port-056000: exit status 7 (31.164584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-056000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-056000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-056000 --alsologtostderr -v=1: exit status 83 (43.5125ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-056000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-056000"

-- /stdout --
** stderr ** 
	I0307 10:24:47.120774    6096 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:24:47.120935    6096 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:24:47.120938    6096 out.go:304] Setting ErrFile to fd 2...
	I0307 10:24:47.120941    6096 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:24:47.121071    6096 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:24:47.121298    6096 out.go:298] Setting JSON to false
	I0307 10:24:47.121307    6096 mustload.go:65] Loading cluster: default-k8s-diff-port-056000
	I0307 10:24:47.121504    6096 config.go:182] Loaded profile config "default-k8s-diff-port-056000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 10:24:47.126313    6096 out.go:177] * The control-plane node default-k8s-diff-port-056000 host is not running: state=Stopped
	I0307 10:24:47.130165    6096 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-056000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-056000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-056000 -n default-k8s-diff-port-056000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-056000 -n default-k8s-diff-port-056000: exit status 7 (30.885583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-056000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-056000 -n default-k8s-diff-port-056000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-056000 -n default-k8s-diff-port-056000: exit status 7 (31.202917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-056000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)
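Three distinct exit codes recur through these sections. As a reading aid (descriptions inferred from the surrounding log lines and from the bit-encoded status code documented in `minikube status --help`; treat these as this report's observations, not authoritative definitions):

    // Exit codes observed in this report (hedged interpretation, see note above).
    var observedExitCodes = map[int]string{
    	7:  "status: bit-encoded component state; 7 = host + kubelet + apiserver all down, hence the helper's \"may be ok\" note",
    	80: "start: GUEST_PROVISION failure, seen in the SecondStart sections when the qemu2 VM cannot be restarted",
    	83: "pause/unpause: advisory exit, the profile exists but its host is not running, so only \"minikube start\" advice is printed",
    }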

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-706000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-706000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (5.184578125s)

-- stdout --
	* [newest-cni-706000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-706000" primary control-plane node in "newest-cni-706000" cluster
	* Restarting existing qemu2 VM for "newest-cni-706000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-706000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0307 10:24:50.145069    6137 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:24:50.145217    6137 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:24:50.145220    6137 out.go:304] Setting ErrFile to fd 2...
	I0307 10:24:50.145222    6137 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:24:50.145360    6137 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:24:50.146341    6137 out.go:298] Setting JSON to false
	I0307 10:24:50.162294    6137 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5062,"bootTime":1709830828,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 10:24:50.162362    6137 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 10:24:50.167718    6137 out.go:177] * [newest-cni-706000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 10:24:50.175793    6137 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 10:24:50.179763    6137 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 10:24:50.175831    6137 notify.go:220] Checking for updates...
	I0307 10:24:50.182830    6137 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 10:24:50.185739    6137 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:24:50.188808    6137 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 10:24:50.191845    6137 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:24:50.195064    6137 config.go:182] Loaded profile config "newest-cni-706000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0307 10:24:50.195333    6137 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 10:24:50.199769    6137 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 10:24:50.206685    6137 start.go:297] selected driver: qemu2
	I0307 10:24:50.206693    6137 start.go:901] validating driver "qemu2" against &{Name:newest-cni-706000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-706000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:24:50.206758    6137 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:24:50.209065    6137 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0307 10:24:50.209113    6137 cni.go:84] Creating CNI manager for ""
	I0307 10:24:50.209120    6137 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 10:24:50.209150    6137 start.go:340] cluster config:
	{Name:newest-cni-706000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-706000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 10:24:50.213471    6137 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:24:50.220759    6137 out.go:177] * Starting "newest-cni-706000" primary control-plane node in "newest-cni-706000" cluster
	I0307 10:24:50.224714    6137 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0307 10:24:50.224727    6137 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0307 10:24:50.224736    6137 cache.go:56] Caching tarball of preloaded images
	I0307 10:24:50.224784    6137 preload.go:173] Found /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 10:24:50.224790    6137 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0307 10:24:50.224845    6137 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/newest-cni-706000/config.json ...
	I0307 10:24:50.225349    6137 start.go:360] acquireMachinesLock for newest-cni-706000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:24:50.225386    6137 start.go:364] duration metric: took 30.625µs to acquireMachinesLock for "newest-cni-706000"
	I0307 10:24:50.225394    6137 start.go:96] Skipping create...Using existing machine configuration
	I0307 10:24:50.225400    6137 fix.go:54] fixHost starting: 
	I0307 10:24:50.225512    6137 fix.go:112] recreateIfNeeded on newest-cni-706000: state=Stopped err=<nil>
	W0307 10:24:50.225520    6137 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 10:24:50.227123    6137 out.go:177] * Restarting existing qemu2 VM for "newest-cni-706000" ...
	I0307 10:24:50.234806    6137 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/newest-cni-706000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/newest-cni-706000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/newest-cni-706000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:73:f9:5a:73:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/newest-cni-706000/disk.qcow2
	I0307 10:24:50.236802    6137 main.go:141] libmachine: STDOUT: 
	I0307 10:24:50.236826    6137 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:24:50.236855    6137 fix.go:56] duration metric: took 11.455584ms for fixHost
	I0307 10:24:50.236860    6137 start.go:83] releasing machines lock for "newest-cni-706000", held for 11.470041ms
	W0307 10:24:50.236867    6137 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:24:50.236903    6137 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:24:50.236908    6137 start.go:728] Will try again in 5 seconds ...
	I0307 10:24:55.238902    6137 start.go:360] acquireMachinesLock for newest-cni-706000: {Name:mk67f95fc11e0179c933cc58842f09aef5647963 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:24:55.239276    6137 start.go:364] duration metric: took 264.667µs to acquireMachinesLock for "newest-cni-706000"
	I0307 10:24:55.239382    6137 start.go:96] Skipping create...Using existing machine configuration
	I0307 10:24:55.239396    6137 fix.go:54] fixHost starting: 
	I0307 10:24:55.239830    6137 fix.go:112] recreateIfNeeded on newest-cni-706000: state=Stopped err=<nil>
	W0307 10:24:55.239846    6137 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 10:24:55.248279    6137 out.go:177] * Restarting existing qemu2 VM for "newest-cni-706000" ...
	I0307 10:24:55.253402    6137 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/newest-cni-706000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/newest-cni-706000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/newest-cni-706000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:73:f9:5a:73:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18241-1349/.minikube/machines/newest-cni-706000/disk.qcow2
	I0307 10:24:55.263025    6137 main.go:141] libmachine: STDOUT: 
	I0307 10:24:55.263116    6137 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0307 10:24:55.263248    6137 fix.go:56] duration metric: took 23.849291ms for fixHost
	I0307 10:24:55.263274    6137 start.go:83] releasing machines lock for "newest-cni-706000", held for 23.977792ms
	W0307 10:24:55.263558    6137 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-706000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-706000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0307 10:24:55.270076    6137 out.go:177] 
	W0307 10:24:55.274192    6137 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0307 10:24:55.274219    6137 out.go:239] * 
	* 
	W0307 10:24:55.276882    6137 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:24:55.285152    6137 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-706000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-706000 -n newest-cni-706000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-706000 -n newest-cni-706000: exit status 7 (69.250916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-706000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
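The root cause is visible in the stderr above: the qemu2 driver does not launch qemu-system-aarch64 directly, it execs it through /opt/socket_vmnet/bin/socket_vmnet_client, which must reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet. With nothing listening there, both restart attempts fail with "Connection refused" before qemu ever runs. A standalone probe for that precondition (a hypothetical diagnostic sketch, not part of the suite):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// minikube's qemu2 driver fails exactly when nothing accepts on this socket.
    	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
    	if err != nil {
    		fmt.Println("socket_vmnet unreachable:", err) // e.g. "connection refused"
    		return
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }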

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-706000 image list --format=json
start_stop_delete_test.go:304: v1.29.0-rc.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.10-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-controller-manager:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-proxy:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-scheduler:v1.29.0-rc.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-706000 -n newest-cni-706000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-706000 -n newest-cni-706000: exit status 7 (31.731125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-706000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)
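Same failure shape as the default-k8s-diff-port group, only the want list tracks the Kubernetes version under test. For reference (rows copied verbatim from the two diffs above; the full sets also include the kube-* images, storage-provisioner and pause):

    // Version-dependent expectations taken from the diffs above.
    var wantByVersion = map[string][]string{
    	"v1.28.4":      {"registry.k8s.io/etcd:3.5.9-0", "registry.k8s.io/coredns/coredns:v1.10.1"},
    	"v1.29.0-rc.2": {"registry.k8s.io/etcd:3.5.10-0", "registry.k8s.io/coredns/coredns:v1.11.1"},
    }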

TestStartStop/group/newest-cni/serial/Pause (0.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-706000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-706000 --alsologtostderr -v=1: exit status 83 (42.712625ms)

-- stdout --
	* The control-plane node newest-cni-706000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-706000"

-- /stdout --
** stderr ** 
	I0307 10:24:55.473661    6154 out.go:291] Setting OutFile to fd 1 ...
	I0307 10:24:55.473787    6154 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:24:55.473794    6154 out.go:304] Setting ErrFile to fd 2...
	I0307 10:24:55.473796    6154 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 10:24:55.473905    6154 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 10:24:55.474110    6154 out.go:298] Setting JSON to false
	I0307 10:24:55.474120    6154 mustload.go:65] Loading cluster: newest-cni-706000
	I0307 10:24:55.474301    6154 config.go:182] Loaded profile config "newest-cni-706000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0307 10:24:55.478008    6154 out.go:177] * The control-plane node newest-cni-706000 host is not running: state=Stopped
	I0307 10:24:55.481997    6154 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-706000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-706000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-706000 -n newest-cni-706000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-706000 -n newest-cni-706000: exit status 7 (31.644292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-706000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-706000 -n newest-cni-706000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-706000 -n newest-cni-706000: exit status 7 (32.01625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-706000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)


Test pass (160/281)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.24
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.23
12 TestDownloadOnly/v1.28.4/json-events 34.04
13 TestDownloadOnly/v1.28.4/preload-exists 0
16 TestDownloadOnly/v1.28.4/kubectl 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.08
18 TestDownloadOnly/v1.28.4/DeleteAll 0.24
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.23
21 TestDownloadOnly/v1.29.0-rc.2/json-events 29.7
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.23
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.23
30 TestBinaryMirror 0.37
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 211.32
38 TestAddons/parallel/Registry 18.17
40 TestAddons/parallel/InspektorGadget 10.23
41 TestAddons/parallel/MetricsServer 5.25
44 TestAddons/parallel/CSI 45.71
45 TestAddons/parallel/Headlamp 12.54
46 TestAddons/parallel/CloudSpanner 5.16
47 TestAddons/parallel/LocalPath 51.76
48 TestAddons/parallel/NvidiaDevicePlugin 5.14
49 TestAddons/parallel/Yakd 5
52 TestAddons/serial/GCPAuth/Namespaces 0.14
53 TestAddons/StoppedEnableDisable 12.4
61 TestHyperKitDriverInstallOrUpdate 9.6
64 TestErrorSpam/setup 33.11
65 TestErrorSpam/start 0.34
66 TestErrorSpam/status 0.24
67 TestErrorSpam/pause 0.72
68 TestErrorSpam/unpause 0.63
69 TestErrorSpam/stop 55.26
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 47.88
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 35.24
76 TestFunctional/serial/KubeContext 0.03
77 TestFunctional/serial/KubectlGetPods 0.05
80 TestFunctional/serial/CacheCmd/cache/add_remote 9.53
81 TestFunctional/serial/CacheCmd/cache/add_local 1.23
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
83 TestFunctional/serial/CacheCmd/cache/list 0.04
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
85 TestFunctional/serial/CacheCmd/cache/cache_reload 2.15
86 TestFunctional/serial/CacheCmd/cache/delete 0.07
87 TestFunctional/serial/MinikubeKubectlCmd 0.53
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.65
89 TestFunctional/serial/ExtraConfig 34.02
90 TestFunctional/serial/ComponentHealth 0.04
91 TestFunctional/serial/LogsCmd 0.7
92 TestFunctional/serial/LogsFileCmd 0.65
93 TestFunctional/serial/InvalidService 3.92
95 TestFunctional/parallel/ConfigCmd 0.24
96 TestFunctional/parallel/DashboardCmd 7.67
97 TestFunctional/parallel/DryRun 0.24
98 TestFunctional/parallel/InternationalLanguage 0.12
99 TestFunctional/parallel/StatusCmd 0.25
104 TestFunctional/parallel/AddonsCmd 0.13
105 TestFunctional/parallel/PersistentVolumeClaim 24.74
107 TestFunctional/parallel/SSHCmd 0.14
108 TestFunctional/parallel/CpCmd 0.42
110 TestFunctional/parallel/FileSync 0.07
111 TestFunctional/parallel/CertSync 0.41
115 TestFunctional/parallel/NodeLabels 0.04
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.06
119 TestFunctional/parallel/License 1.23
120 TestFunctional/parallel/Version/short 0.04
121 TestFunctional/parallel/Version/components 0.16
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.07
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.07
126 TestFunctional/parallel/ImageCommands/ImageBuild 5.92
127 TestFunctional/parallel/ImageCommands/Setup 5.26
128 TestFunctional/parallel/DockerEnv/bash 0.43
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.06
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
132 TestFunctional/parallel/ServiceCmd/DeployApp 13.09
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.16
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.53
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.31
136 TestFunctional/parallel/ServiceCmd/List 0.09
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.09
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
139 TestFunctional/parallel/ServiceCmd/Format 0.1
140 TestFunctional/parallel/ServiceCmd/URL 0.1
142 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.73
143 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.5
144 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
146 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.25
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.16
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.65
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.58
150 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
151 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
152 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
153 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
154 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
155 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
156 TestFunctional/parallel/ProfileCmd/profile_not_create 0.18
157 TestFunctional/parallel/ProfileCmd/profile_list 0.15
158 TestFunctional/parallel/ProfileCmd/profile_json_output 0.15
159 TestFunctional/parallel/MountCmd/any-port 9.35
160 TestFunctional/parallel/MountCmd/specific-port 1.05
161 TestFunctional/parallel/MountCmd/VerifyCleanup 1.5
162 TestFunctional/delete_addon-resizer_images 0.15
163 TestFunctional/delete_my-image_image 0.04
164 TestFunctional/delete_minikube_cached_images 0.04
168 TestMutliControlPlane/serial/StartCluster 302.95
169 TestMutliControlPlane/serial/DeployApp 9.22
170 TestMutliControlPlane/serial/PingHostFromPods 0.79
171 TestMutliControlPlane/serial/AddWorkerNode 51.25
172 TestMutliControlPlane/serial/NodeLabels 0.15
173 TestMutliControlPlane/serial/HAppyAfterClusterStart 0.26
174 TestMutliControlPlane/serial/CopyFile 4.39
178 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart 79.98
186 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.07
193 TestJSONOutput/start/Audit 0
195 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/pause/Audit 0
201 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/unpause/Audit 0
207 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
210 TestJSONOutput/stop/Command 1.75
211 TestJSONOutput/stop/Audit 0
213 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
214 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
215 TestErrorJSONOutput 0.33
220 TestMainNoArgs 0.03
267 TestStoppedBinaryUpgrade/Setup 4.94
279 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
283 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
284 TestNoKubernetes/serial/ProfileList 31.34
285 TestNoKubernetes/serial/Stop 3.31
287 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
295 TestStoppedBinaryUpgrade/MinikubeLogs 0.8
304 TestStartStop/group/old-k8s-version/serial/Stop 3.51
305 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
313 TestStartStop/group/no-preload/serial/Stop 3.6
316 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
324 TestStartStop/group/embed-certs/serial/Stop 3.5
327 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.14
335 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.7
338 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
340 TestStartStop/group/newest-cni/serial/DeployApp 0
341 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
344 TestStartStop/group/newest-cni/serial/Stop 3.01
347 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
349 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
350 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-996000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-996000: exit status 85 (90.6195ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-996000 | jenkins | v1.32.0 | 07 Mar 24 09:28 PST |          |
	|         | -p download-only-996000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 09:28:56
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 09:28:56.448663    1783 out.go:291] Setting OutFile to fd 1 ...
	I0307 09:28:56.448798    1783 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 09:28:56.448801    1783 out.go:304] Setting ErrFile to fd 2...
	I0307 09:28:56.448803    1783 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 09:28:56.448934    1783 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	W0307 09:28:56.449023    1783 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18241-1349/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18241-1349/.minikube/config/config.json: no such file or directory
	I0307 09:28:56.450235    1783 out.go:298] Setting JSON to true
	I0307 09:28:56.467946    1783 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1708,"bootTime":1709830828,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 09:28:56.468004    1783 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 09:28:56.473248    1783 out.go:97] [download-only-996000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 09:28:56.477122    1783 out.go:169] MINIKUBE_LOCATION=18241
	I0307 09:28:56.473391    1783 notify.go:220] Checking for updates...
	W0307 09:28:56.473421    1783 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball: no such file or directory
	I0307 09:28:56.484992    1783 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 09:28:56.489264    1783 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 09:28:56.496021    1783 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 09:28:56.499170    1783 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	W0307 09:28:56.505134    1783 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0307 09:28:56.505331    1783 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 09:28:56.509122    1783 out.go:97] Using the qemu2 driver based on user configuration
	I0307 09:28:56.509144    1783 start.go:297] selected driver: qemu2
	I0307 09:28:56.509161    1783 start.go:901] validating driver "qemu2" against <nil>
	I0307 09:28:56.509236    1783 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 09:28:56.512188    1783 out.go:169] Automatically selected the socket_vmnet network
	I0307 09:28:56.518111    1783 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0307 09:28:56.518211    1783 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 09:28:56.518312    1783 cni.go:84] Creating CNI manager for ""
	I0307 09:28:56.518330    1783 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0307 09:28:56.518382    1783 start.go:340] cluster config:
	{Name:download-only-996000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-996000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 09:28:56.522875    1783 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 09:28:56.527151    1783 out.go:97] Downloading VM boot image ...
	I0307 09:28:56.527172    1783 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso
	I0307 09:29:14.599321    1783 out.go:97] Starting "download-only-996000" primary control-plane node in "download-only-996000" cluster
	I0307 09:29:14.599367    1783 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0307 09:29:14.871643    1783 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0307 09:29:14.871762    1783 cache.go:56] Caching tarball of preloaded images
	I0307 09:29:14.872537    1783 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0307 09:29:14.878059    1783 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0307 09:29:14.878118    1783 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0307 09:29:15.466596    1783 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0307 09:29:34.639957    1783 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0307 09:29:34.640139    1783 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0307 09:29:35.366876    1783 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0307 09:29:35.367062    1783 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/download-only-996000/config.json ...
	I0307 09:29:35.367081    1783 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/download-only-996000/config.json: {Name:mk96b11f02051e864ff39bad632d46a942eba181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 09:29:35.367324    1783 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0307 09:29:35.367501    1783 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0307 09:29:36.073727    1783 out.go:169] 
	W0307 09:29:36.077780    1783 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18241-1349/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10686f0a0 0x10686f0a0 0x10686f0a0 0x10686f0a0 0x10686f0a0 0x10686f0a0 0x10686f0a0] Decompressors:map[bz2:0x140004a93b0 gz:0x140004a93b8 tar:0x140004a9360 tar.bz2:0x140004a9370 tar.gz:0x140004a9380 tar.xz:0x140004a9390 tar.zst:0x140004a93a0 tbz2:0x140004a9370 tgz:0x140004a9380 txz:0x140004a9390 tzst:0x140004a93a0 xz:0x140004a93c0 zip:0x140004a93d0 zst:0x140004a93c8] Getters:map[file:0x14002602620 http:0x14000178960 https:0x140001789b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0307 09:29:36.077813    1783 out_reason.go:110] 
	W0307 09:29:36.084749    1783 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 09:29:36.088747    1783 out.go:169] 
	
	
	* The control-plane node download-only-996000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-996000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
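The "Last Start" log above also explains the v1.20.0 json-events and kubectl failures in the summary tables: minikube caches kubectl through hashicorp/go-getter (the &{Ctx:... Decompressors:... Getters:...} dump is go-getter's client state) using a `?checksum=file:...sha256` source string, and dl.k8s.io returns 404 for the v1.20.0 darwin/arm64 checksum file, so the fetch aborts before the binary is downloaded. A reduced sketch of that call (assuming go-getter v1's GetFile; the destination path here is illustrative):

    package main

    import (
    	"fmt"

    	getter "github.com/hashicorp/go-getter"
    )

    func main() {
    	// Binary URL plus a checksum query pointing at its .sha256 file; the
    	// 404 on the checksum file is what fails, mirroring the log above.
    	src := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl" +
    		"?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
    	if err := getter.GetFile("/tmp/kubectl.download", src); err != nil {
    		fmt.Println("download failed:", err)
    	}
    }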

TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-996000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

TestDownloadOnly/v1.28.4/json-events (34.04s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-307000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-307000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=qemu2 : (34.0443525s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (34.04s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-307000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-307000: exit status 85 (81.30125ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-996000 | jenkins | v1.32.0 | 07 Mar 24 09:28 PST |                     |
	|         | -p download-only-996000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 07 Mar 24 09:29 PST | 07 Mar 24 09:29 PST |
	| delete  | -p download-only-996000        | download-only-996000 | jenkins | v1.32.0 | 07 Mar 24 09:29 PST | 07 Mar 24 09:29 PST |
	| start   | -o=json --download-only        | download-only-307000 | jenkins | v1.32.0 | 07 Mar 24 09:29 PST |                     |
	|         | -p download-only-307000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 09:29:36
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 09:29:36.765576    1833 out.go:291] Setting OutFile to fd 1 ...
	I0307 09:29:36.765713    1833 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 09:29:36.765716    1833 out.go:304] Setting ErrFile to fd 2...
	I0307 09:29:36.765719    1833 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 09:29:36.765915    1833 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 09:29:36.767307    1833 out.go:298] Setting JSON to true
	I0307 09:29:36.783641    1833 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1748,"bootTime":1709830828,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 09:29:36.783711    1833 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 09:29:36.788317    1833 out.go:97] [download-only-307000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 09:29:36.792183    1833 out.go:169] MINIKUBE_LOCATION=18241
	I0307 09:29:36.788420    1833 notify.go:220] Checking for updates...
	I0307 09:29:36.799215    1833 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 09:29:36.802257    1833 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 09:29:36.805289    1833 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 09:29:36.808286    1833 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	W0307 09:29:36.814267    1833 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0307 09:29:36.814439    1833 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 09:29:36.817249    1833 out.go:97] Using the qemu2 driver based on user configuration
	I0307 09:29:36.817264    1833 start.go:297] selected driver: qemu2
	I0307 09:29:36.817268    1833 start.go:901] validating driver "qemu2" against <nil>
	I0307 09:29:36.817318    1833 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 09:29:36.820247    1833 out.go:169] Automatically selected the socket_vmnet network
	I0307 09:29:36.825415    1833 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0307 09:29:36.825512    1833 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 09:29:36.825551    1833 cni.go:84] Creating CNI manager for ""
	I0307 09:29:36.825560    1833 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 09:29:36.825567    1833 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 09:29:36.825605    1833 start.go:340] cluster config:
	{Name:download-only-307000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-307000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 09:29:36.830040    1833 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 09:29:36.833272    1833 out.go:97] Starting "download-only-307000" primary control-plane node in "download-only-307000" cluster
	I0307 09:29:36.833282    1833 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 09:29:37.499974    1833 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 09:29:37.500038    1833 cache.go:56] Caching tarball of preloaded images
	I0307 09:29:37.500831    1833 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 09:29:37.506348    1833 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0307 09:29:37.506375    1833 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0307 09:29:38.105198    1833 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4?checksum=md5:6fb922d1d9dc01a9d3c0b965ed219613 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0307 09:29:56.953710    1833 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0307 09:29:56.953865    1833 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0307 09:29:57.547760    1833 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 09:29:57.547956    1833 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/download-only-307000/config.json ...
	I0307 09:29:57.547976    1833 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/download-only-307000/config.json: {Name:mk5a5ddfafb39a1bf7ab4321a606fb894714a239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 09:29:57.548237    1833 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 09:29:57.548362    1833 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/darwin/arm64/v1.28.4/kubectl
	
	
	* The control-plane node download-only-307000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-307000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)
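
Note: the non-zero exit above is the expected result, not a failure. A --download-only profile never creates the qemu2 guest, so "minikube logs" has nothing to read and exits 85 after printing the audit table. A minimal by-hand reproduction (the profile name "download-only-demo" is illustrative):

    out/minikube-darwin-arm64 start -o=json --download-only -p download-only-demo \
        --force --alsologtostderr --kubernetes-version=v1.28.4 \
        --container-runtime=docker --driver=qemu2
    out/minikube-darwin-arm64 logs -p download-only-demo
    echo $?    # 85: the control-plane host was never created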

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.24s)

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-307000
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.23s)
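
Note: the two delete variants exercised by DeleteAll and DeleteAlwaysSucceeds differ only in scope; both forms appear verbatim in the audit tables above:

    out/minikube-darwin-arm64 delete -p download-only-307000    # one profile
    out/minikube-darwin-arm64 delete --all                      # every profile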

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (29.7s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-861000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-861000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=qemu2 : (29.696072166s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (29.70s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-861000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-861000: exit status 85 (83.754042ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-996000 | jenkins | v1.32.0 | 07 Mar 24 09:28 PST |                     |
	|         | -p download-only-996000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 07 Mar 24 09:29 PST | 07 Mar 24 09:29 PST |
	| delete  | -p download-only-996000           | download-only-996000 | jenkins | v1.32.0 | 07 Mar 24 09:29 PST | 07 Mar 24 09:29 PST |
	| start   | -o=json --download-only           | download-only-307000 | jenkins | v1.32.0 | 07 Mar 24 09:29 PST |                     |
	|         | -p download-only-307000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 07 Mar 24 09:30 PST | 07 Mar 24 09:30 PST |
	| delete  | -p download-only-307000           | download-only-307000 | jenkins | v1.32.0 | 07 Mar 24 09:30 PST | 07 Mar 24 09:30 PST |
	| start   | -o=json --download-only           | download-only-861000 | jenkins | v1.32.0 | 07 Mar 24 09:30 PST |                     |
	|         | -p download-only-861000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 09:30:11
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 09:30:11.361954    1883 out.go:291] Setting OutFile to fd 1 ...
	I0307 09:30:11.362093    1883 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 09:30:11.362096    1883 out.go:304] Setting ErrFile to fd 2...
	I0307 09:30:11.362098    1883 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 09:30:11.362230    1883 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 09:30:11.363362    1883 out.go:298] Setting JSON to true
	I0307 09:30:11.380296    1883 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1783,"bootTime":1709830828,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 09:30:11.380360    1883 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 09:30:11.384933    1883 out.go:97] [download-only-861000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 09:30:11.388850    1883 out.go:169] MINIKUBE_LOCATION=18241
	I0307 09:30:11.385066    1883 notify.go:220] Checking for updates...
	I0307 09:30:11.395874    1883 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 09:30:11.399040    1883 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 09:30:11.401873    1883 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 09:30:11.404873    1883 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	W0307 09:30:11.409392    1883 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0307 09:30:11.409552    1883 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 09:30:11.412882    1883 out.go:97] Using the qemu2 driver based on user configuration
	I0307 09:30:11.412891    1883 start.go:297] selected driver: qemu2
	I0307 09:30:11.412895    1883 start.go:901] validating driver "qemu2" against <nil>
	I0307 09:30:11.412943    1883 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 09:30:11.415856    1883 out.go:169] Automatically selected the socket_vmnet network
	I0307 09:30:11.421458    1883 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0307 09:30:11.421560    1883 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 09:30:11.421595    1883 cni.go:84] Creating CNI manager for ""
	I0307 09:30:11.421607    1883 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 09:30:11.421618    1883 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 09:30:11.421653    1883 start.go:340] cluster config:
	{Name:download-only-861000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-861000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 09:30:11.426333    1883 iso.go:125] acquiring lock: {Name:mkc8e88609b02a27c09182960b53cd728f6e9532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 09:30:11.428885    1883 out.go:97] Starting "download-only-861000" primary control-plane node in "download-only-861000" cluster
	I0307 09:30:11.428898    1883 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0307 09:30:12.104737    1883 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0307 09:30:12.104804    1883 cache.go:56] Caching tarball of preloaded images
	I0307 09:30:12.105534    1883 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0307 09:30:12.111071    1883 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0307 09:30:12.111106    1883 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0307 09:30:12.697445    1883 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4?checksum=md5:ec278d0a65e5e64ee0e67f51e14b1867 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0307 09:30:29.343208    1883 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0307 09:30:29.343334    1883 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0307 09:30:29.908664    1883 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0307 09:30:29.908916    1883 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/download-only-861000/config.json ...
	I0307 09:30:29.908936    1883 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/download-only-861000/config.json: {Name:mk151f06ff26569e89b31200aa3fca520ca0da2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 09:30:29.909194    1883 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0307 09:30:29.909319    1883 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18241-1349/.minikube/cache/darwin/arm64/v1.29.0-rc.2/kubectl
	
	
	* The control-plane node download-only-861000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-861000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)
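
Note: both preload downloads above are checksum-gated: the URL carries a checksum=md5:... query, and the harness re-verifies the file after saving it. Re-checking the cached v1.29.0-rc.2 tarball by hand on macOS (path and digest taken from this run's log):

    CACHE=/Users/jenkins/minikube-integration/18241-1349/.minikube/cache/preloaded-tarball
    md5 -q "$CACHE/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4"
    # expected: ec278d0a65e5e64ee0e67f51e14b1867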

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-861000
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.23s)

                                                
                                    
TestBinaryMirror (0.37s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-371000 --alsologtostderr --binary-mirror http://127.0.0.1:49328 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-371000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-371000
--- PASS: TestBinaryMirror (0.37s)
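
Note: this test points --binary-mirror at a local HTTP endpoint so kubectl/kubelet/kubeadm are fetched from the mirror instead of dl.k8s.io. A sketch of standing up such a mirror, assuming it must mimic dl.k8s.io's path layout (the ./mirror directory and its contents are hypothetical):

    # e.g. ./mirror/release/v1.28.4/bin/darwin/arm64/kubectl
    python3 -m http.server 49328 --directory ./mirror &
    out/minikube-darwin-arm64 start --download-only -p binary-mirror-demo \
        --binary-mirror http://127.0.0.1:49328 --driver=qemu2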

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-040000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-040000: exit status 85 (58.206958ms)

                                                
                                                
-- stdout --
	* Profile "addons-040000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-040000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
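
Note: exit status 85 is the "no such profile" guard seen throughout this report; the addon subcommands refuse to act before a cluster exists. Reproducible with any unused profile name:

    out/minikube-darwin-arm64 addons enable dashboard -p no-such-profile
    echo $?    # 85, after the "Profile ... not found" hint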

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-040000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-040000: exit status 85 (61.684417ms)

                                                
                                                
-- stdout --
	* Profile "addons-040000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-040000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (211.32s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-040000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-darwin-arm64 start -p addons-040000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m31.3154955s)
--- PASS: TestAddons/Setup (211.32s)

                                                
                                    
TestAddons/parallel/Registry (18.17s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 7.330709ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-4jsfg" [1625931b-1b36-4877-9d0e-5a1ec025d3a7] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004164041s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-t4p6j" [064ff01e-79aa-4a33-bdd3-077357314bb1] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00511975s
addons_test.go:340: (dbg) Run:  kubectl --context addons-040000 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-040000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-040000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.803520667s)
addons_test.go:359: (dbg) Run:  out/minikube-darwin-arm64 -p addons-040000 ip
2024/03/07 09:34:31 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-darwin-arm64 -p addons-040000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.17s)
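
Note: the registry check has two halves, both visible above: an in-cluster probe of the Service DNS name from a throwaway busybox pod, and a host-side GET against port 5000 on the node IP (192.168.105.2 in this run). By hand:

    kubectl --context addons-040000 run --rm registry-test --restart=Never \
        --image=gcr.io/k8s-minikube/busybox -it -- \
        sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    curl -sI "http://$(out/minikube-darwin-arm64 -p addons-040000 ip):5000"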

                                                
                                    
TestAddons/parallel/InspektorGadget (10.23s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-d6qd4" [ef54580f-3a7c-44be-a4bc-3bd432eefc16] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004389375s
addons_test.go:841: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-040000
addons_test.go:841: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-040000: (5.225397334s)
--- PASS: TestAddons/parallel/InspektorGadget (10.23s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 12.46525ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-vl44c" [a61e5c1a-873a-4244-9696-dc91d18176c4] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003853625s
addons_test.go:415: (dbg) Run:  kubectl --context addons-040000 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-darwin-arm64 -p addons-040000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.25s)
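
Note: once metrics-server reports healthy, resource usage becomes queryable through the metrics API; "kubectl top pods" is what the test runs, and "kubectl top nodes" works the same way:

    kubectl --context addons-040000 top pods -n kube-system
    kubectl --context addons-040000 top nodes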

                                                
                                    
TestAddons/parallel/CSI (45.71s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 7.483ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-040000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-040000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [85c5da79-4a71-44d6-a15c-f9a5dbc8b8fd] Pending
helpers_test.go:344: "task-pv-pod" [85c5da79-4a71-44d6-a15c-f9a5dbc8b8fd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [85c5da79-4a71-44d6-a15c-f9a5dbc8b8fd] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 17.003722459s
addons_test.go:584: (dbg) Run:  kubectl --context addons-040000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-040000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-040000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-040000 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-040000 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-040000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-040000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [48bfd56d-d3f3-42db-bfe4-7d5a6691f344] Pending
helpers_test.go:344: "task-pv-pod-restore" [48bfd56d-d3f3-42db-bfe4-7d5a6691f344] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [48bfd56d-d3f3-42db-bfe4-7d5a6691f344] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004118041s
addons_test.go:626: (dbg) Run:  kubectl --context addons-040000 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-040000 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-040000 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-darwin-arm64 -p addons-040000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-darwin-arm64 -p addons-040000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.117782583s)
addons_test.go:642: (dbg) Run:  out/minikube-darwin-arm64 -p addons-040000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (45.71s)
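
Note: the runs of identical jsonpath queries above are the harness polling .status.phase until the claim reaches the phase it wants. A hand-rolled equivalent of that poll (a sketch; the real loop lives in helpers_test.go):

    while [ "$(kubectl --context addons-040000 get pvc hpvc \
            -o jsonpath='{.status.phase}' -n default)" != "Bound" ]; do
        sleep 2
    done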

                                                
                                    
TestAddons/parallel/Headlamp (12.54s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-040000 --alsologtostderr -v=1
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-fqnfz" [a07f9b87-f66d-4e04-a6ff-6116fdc2d2a5] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-fqnfz" [a07f9b87-f66d-4e04-a6ff-6116fdc2d2a5] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003918542s
--- PASS: TestAddons/parallel/Headlamp (12.54s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.16s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-mlnhn" [b9dce7cd-cadc-40f0-8e36-4d33886d4f02] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00398125s
addons_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-040000
--- PASS: TestAddons/parallel/CloudSpanner (5.16s)

                                                
                                    
TestAddons/parallel/LocalPath (51.76s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-040000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-040000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [1907aac4-6f9d-4cf5-8255-75931fce2d14] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [1907aac4-6f9d-4cf5-8255-75931fce2d14] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [1907aac4-6f9d-4cf5-8255-75931fce2d14] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004350375s
addons_test.go:891: (dbg) Run:  kubectl --context addons-040000 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-darwin-arm64 -p addons-040000 ssh "cat /opt/local-path-provisioner/pvc-c4f2b98e-ba12-40bb-bdca-a7b767b70ab6_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-040000 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-040000 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-darwin-arm64 -p addons-040000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-darwin-arm64 -p addons-040000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.299872042s)
--- PASS: TestAddons/parallel/LocalPath (51.76s)
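
Note: local-path volumes are directories under /opt/local-path-provisioner inside the guest, which is why the harness verifies file1 over ssh. Listing the per-PVC directories directly:

    out/minikube-darwin-arm64 -p addons-040000 ssh "ls /opt/local-path-provisioner/"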

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.14s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-pxhm2" [a5b807d3-67be-4a1d-baaf-1e0d7aac4caa] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004094917s
addons_test.go:955: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-040000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.14s)

                                                
                                    
TestAddons/parallel/Yakd (5s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-944jt" [a84c753c-226e-44e5-ac2a-2eabf7cd9121] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004045166s
--- PASS: TestAddons/parallel/Yakd (5.00s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-040000 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-040000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)
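
Note: this passes because the gcp-auth addon propagates its credential secret into namespaces created after it is enabled (presumably via its admission webhook). The check is just create-then-read; "demo-ns" below is an illustrative name:

    kubectl --context addons-040000 create ns demo-ns
    kubectl --context addons-040000 get secret gcp-auth -n demo-ns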

                                                
                                    
TestAddons/StoppedEnableDisable (12.4s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-040000
addons_test.go:172: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-040000: (12.207693708s)
addons_test.go:176: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-040000
addons_test.go:180: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-040000
addons_test.go:185: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-040000
--- PASS: TestAddons/StoppedEnableDisable (12.40s)
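
Note: the enable/disable calls here run against a stopped cluster and still succeed; addon state is recorded in the profile's config and, presumably, reconciled on the next start. The sequence, condensed:

    out/minikube-darwin-arm64 stop -p addons-040000
    out/minikube-darwin-arm64 addons enable dashboard -p addons-040000
    out/minikube-darwin-arm64 addons disable dashboard -p addons-040000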

                                                
                                    
TestHyperKitDriverInstallOrUpdate (9.6s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (9.60s)

                                                
                                    
TestErrorSpam/setup (33.11s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-531000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-531000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 --driver=qemu2 : (33.1075665s)
--- PASS: TestErrorSpam/setup (33.11s)
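
Note: the error-spam suite runs each subcommand with a dedicated --log_dir and then inspects the emitted logfiles and command output for unexpected warnings. Roughly what it automates ("nospam-demo" and the temp dir are illustrative):

    LOGDIR=$(mktemp -d)
    out/minikube-darwin-arm64 start -p nospam-demo -n=1 --memory=2250 --wait=false \
        --log_dir="$LOGDIR" --driver=qemu2
    ls "$LOGDIR"    # the logfiles the later subtests clean up and inspect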

                                                
                                    
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
TestErrorSpam/status (0.24s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 status
--- PASS: TestErrorSpam/status (0.24s)

                                                
                                    
TestErrorSpam/pause (0.72s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 pause
--- PASS: TestErrorSpam/pause (0.72s)

                                                
                                    
TestErrorSpam/unpause (0.63s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 unpause
--- PASS: TestErrorSpam/unpause (0.63s)

                                                
                                    
TestErrorSpam/stop (55.26s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 stop: (3.195883416s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 stop: (26.038204667s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-531000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-531000 stop: (26.025668208s)
--- PASS: TestErrorSpam/stop (55.26s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18241-1349/.minikube/files/etc/test/nested/copy/1781/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (47.88s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-618000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-618000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (47.878466792s)
--- PASS: TestFunctional/serial/StartWithProxy (47.88s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (35.24s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-618000 --alsologtostderr -v=8
E0307 09:39:13.866568    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/client.crt: no such file or directory
E0307 09:39:13.873422    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/client.crt: no such file or directory
E0307 09:39:13.883558    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/client.crt: no such file or directory
E0307 09:39:13.905619    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/client.crt: no such file or directory
E0307 09:39:13.947653    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/client.crt: no such file or directory
E0307 09:39:14.029693    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/client.crt: no such file or directory
E0307 09:39:14.191781    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/client.crt: no such file or directory
E0307 09:39:14.513893    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/client.crt: no such file or directory
E0307 09:39:15.155998    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/client.crt: no such file or directory
E0307 09:39:16.438053    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/client.crt: no such file or directory
E0307 09:39:19.000133    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-618000 --alsologtostderr -v=8: (35.235569s)
functional_test.go:659: soft start took 35.235980333s for "functional-618000" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.24s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-618000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (9.53s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 cache add registry.k8s.io/pause:3.1
E0307 09:39:24.122537    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-618000 cache add registry.k8s.io/pause:3.1: (4.010308208s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-618000 cache add registry.k8s.io/pause:3.3: (3.308274208s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-618000 cache add registry.k8s.io/pause:latest: (2.206856625s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (9.53s)
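In shell terms, the remote-cache flow exercised above reduces to the following sequence (a sketch reusing this run's profile name; any pullable image reference works the same way):

    # pull images into minikube's local cache and load them into the node
    out/minikube-darwin-arm64 -p functional-618000 cache add registry.k8s.io/pause:3.1
    out/minikube-darwin-arm64 -p functional-618000 cache add registry.k8s.io/pause:3.3
    # confirm they are visible inside the node's container runtime
    out/minikube-darwin-arm64 -p functional-618000 ssh sudo crictl images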

TestFunctional/serial/CacheCmd/cache/add_local (1.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-618000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local2092454143/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 cache add minikube-local-cache-test:functional-618000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 cache delete minikube-local-cache-test:functional-618000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-618000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.23s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-618000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (75.853667ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-darwin-arm64 -p functional-618000 cache reload: (1.9162005s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.15s)
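The reload check above can be reproduced by hand; the non-zero `crictl inspecti` in the middle is the expected state, proving the image was really gone before `cache reload` restored it:

    # remove the cached image from inside the node
    out/minikube-darwin-arm64 -p functional-618000 ssh sudo docker rmi registry.k8s.io/pause:latest
    # should now fail (exit status 1): image absent in the node
    out/minikube-darwin-arm64 -p functional-618000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
    # push everything in the local cache back into the node, then re-check
    out/minikube-darwin-arm64 -p functional-618000 cache reload
    out/minikube-darwin-arm64 -p functional-618000 ssh sudo crictl inspecti registry.k8s.io/pause:latest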

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.53s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 kubectl -- --context functional-618000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.53s)
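This drives kubectl through minikube's passthrough: everything after `--` is handed unchanged to the bundled kubectl, so a minimal equivalent (sketch) is:

    out/minikube-darwin-arm64 -p functional-618000 kubectl -- get pods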

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.65s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-618000 get pods
E0307 09:39:34.364453    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/client.crt: no such file or directory
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.65s)

TestFunctional/serial/ExtraConfig (34.02s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-618000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0307 09:39:54.846021    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-618000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.015187334s)
functional_test.go:757: restart took 34.015302333s for "functional-618000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (34.02s)
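`--extra-config` takes a component.key=value triple, so the restart above amounts to overriding one API-server flag on an existing profile (sketch, same profile name as this run):

    out/minikube-darwin-arm64 start -p functional-618000 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all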

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-618000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.7s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.70s)

TestFunctional/serial/LogsFileCmd (0.65s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd2477985991/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.65s)

TestFunctional/serial/InvalidService (3.92s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-618000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-618000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-618000: exit status 115 (106.058125ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:30807 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-618000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.92s)

TestFunctional/parallel/ConfigCmd (0.24s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-618000 config get cpus: exit status 14 (35.850875ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-618000 config get cpus: exit status 14 (34.470792ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.24s)
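Exit status 14 on `config get` for an unset key is the behavior the test asserts, so the full set/get/unset round trip is (sketch):

    out/minikube-darwin-arm64 -p functional-618000 config set cpus 2
    out/minikube-darwin-arm64 -p functional-618000 config get cpus      # prints 2
    out/minikube-darwin-arm64 -p functional-618000 config unset cpus
    out/minikube-darwin-arm64 -p functional-618000 config get cpus      # exit status 14: key not found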

TestFunctional/parallel/DashboardCmd (7.67s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-618000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-618000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2751: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.67s)

TestFunctional/parallel/DryRun (0.24s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-618000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-618000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (121.887958ms)

-- stdout --
	* [functional-618000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0307 09:41:10.080528    2727 out.go:291] Setting OutFile to fd 1 ...
	I0307 09:41:10.080691    2727 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 09:41:10.080695    2727 out.go:304] Setting ErrFile to fd 2...
	I0307 09:41:10.080697    2727 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 09:41:10.080843    2727 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 09:41:10.081932    2727 out.go:298] Setting JSON to false
	I0307 09:41:10.100720    2727 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2442,"bootTime":1709830828,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 09:41:10.100817    2727 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 09:41:10.106144    2727 out.go:177] * [functional-618000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0307 09:41:10.115107    2727 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 09:41:10.115163    2727 notify.go:220] Checking for updates...
	I0307 09:41:10.119069    2727 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 09:41:10.123098    2727 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 09:41:10.126162    2727 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 09:41:10.129078    2727 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 09:41:10.132107    2727 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 09:41:10.135390    2727 config.go:182] Loaded profile config "functional-618000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 09:41:10.135648    2727 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 09:41:10.139987    2727 out.go:177] * Using the qemu2 driver based on existing profile
	I0307 09:41:10.147090    2727 start.go:297] selected driver: qemu2
	I0307 09:41:10.147095    2727 start.go:901] validating driver "qemu2" against &{Name:functional-618000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.4 ClusterName:functional-618000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 09:41:10.147130    2727 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 09:41:10.154008    2727 out.go:177] 
	W0307 09:41:10.158035    2727 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0307 09:41:10.161950    2727 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-618000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.24s)
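`--dry-run` runs validation without touching the VM, so the undersized allocation fails fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) while a flag-free dry run revalidates the existing profile and succeeds (sketch):

    # fails validation: 250MB is below the 1800MB usable minimum
    out/minikube-darwin-arm64 start -p functional-618000 --dry-run --memory 250MB --driver=qemu2
    # passes: existing profile revalidated, nothing started
    out/minikube-darwin-arm64 start -p functional-618000 --dry-run --driver=qemu2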

TestFunctional/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-618000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-618000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (115.913709ms)

-- stdout --
	* [functional-618000] minikube v1.32.0 sur Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0307 09:41:10.318624    2740 out.go:291] Setting OutFile to fd 1 ...
	I0307 09:41:10.318741    2740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 09:41:10.318744    2740 out.go:304] Setting ErrFile to fd 2...
	I0307 09:41:10.318747    2740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 09:41:10.318875    2740 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
	I0307 09:41:10.320293    2740 out.go:298] Setting JSON to false
	I0307 09:41:10.338536    2740 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2442,"bootTime":1709830828,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0307 09:41:10.338622    2740 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 09:41:10.343070    2740 out.go:177] * [functional-618000] minikube v1.32.0 sur Darwin 14.3.1 (arm64)
	I0307 09:41:10.349986    2740 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 09:41:10.354143    2740 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	I0307 09:41:10.350098    2740 notify.go:220] Checking for updates...
	I0307 09:41:10.360028    2740 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0307 09:41:10.363075    2740 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 09:41:10.366121    2740 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	I0307 09:41:10.367347    2740 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 09:41:10.370337    2740 config.go:182] Loaded profile config "functional-618000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 09:41:10.370574    2740 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 09:41:10.378877    2740 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0307 09:41:10.382040    2740 start.go:297] selected driver: qemu2
	I0307 09:41:10.382051    2740 start.go:901] validating driver "qemu2" against &{Name:functional-618000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.4 ClusterName:functional-618000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 09:41:10.382126    2740 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 09:41:10.389096    2740 out.go:177] 
	W0307 09:41:10.393081    2740 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0307 09:41:10.397040    2740 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)
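The French output above ("Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY", i.e. the same insufficient-memory exit) is the localized variant of the DryRun failure. A sketch of triggering it by hand, assuming minikube selects translations from the standard locale environment variables (the harness's exact mechanism is not visible in this log):

    LC_ALL=fr_FR.UTF-8 out/minikube-darwin-arm64 start -p functional-618000 --dry-run --memory 250MB --driver=qemu2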

TestFunctional/parallel/StatusCmd (0.25s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.25s)
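`status -f` takes a Go template over the status struct and the label text outside the braces is free-form, so equivalent invocations are (sketch; note the test's own format string spells the label "kublet"):

    out/minikube-darwin-arm64 -p functional-618000 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    out/minikube-darwin-arm64 -p functional-618000 status -o json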

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (24.74s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [1aa0f4a9-7b4e-4122-a3f8-00dd395e0ec7] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004305041s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-618000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-618000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-618000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-618000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [cb85f0df-efdd-410d-9cbc-9aaec5d9dff7] Pending
helpers_test.go:344: "sp-pod" [cb85f0df-efdd-410d-9cbc-9aaec5d9dff7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [cb85f0df-efdd-410d-9cbc-9aaec5d9dff7] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004075416s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-618000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-618000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-618000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [985a4f19-6ae6-4852-8aa7-de72b64dc9e6] Pending
helpers_test.go:344: "sp-pod" [985a4f19-6ae6-4852-8aa7-de72b64dc9e6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [985a4f19-6ae6-4852-8aa7-de72b64dc9e6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003796125s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-618000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.74s)
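The claim/pod round trip above boils down to the sequence below (manifest paths are the test's own testdata); the file surviving pod deletion is what proves the mount is backed by the PVC rather than pod-local storage:

    kubectl --context functional-618000 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-618000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-618000 exec sp-pod -- touch /tmp/mount/foo
    # recreate the pod; /tmp/mount/foo persists because the volume is PVC-backed
    kubectl --context functional-618000 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-618000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-618000 exec sp-pod -- ls /tmp/mount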

TestFunctional/parallel/SSHCmd (0.14s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.14s)

TestFunctional/parallel/CpCmd (0.42s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 ssh -n functional-618000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 cp functional-618000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd2378064164/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 ssh -n functional-618000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 ssh -n functional-618000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.42s)
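`minikube cp` copies in either direction, with the node side addressed as <profile>:<path> (sketch; the host-side destination here is illustrative):

    # host -> node
    out/minikube-darwin-arm64 -p functional-618000 cp testdata/cp-test.txt /home/docker/cp-test.txt
    # node -> host
    out/minikube-darwin-arm64 -p functional-618000 cp functional-618000:/home/docker/cp-test.txt /tmp/cp-test.txt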

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1781/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 ssh "sudo cat /etc/test/nested/copy/1781/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)
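The file is present in the VM because content staged under the sync root recorded by CopySyncFile is mirrored into the node at the same relative path when it starts; a sketch of checking that by hand (the 1781 path component is this run's test PID):

    # host side: file staged under
    #   /Users/jenkins/minikube-integration/18241-1349/.minikube/files/etc/test/nested/copy/1781/hosts
    # node side: verify the mirrored copy
    out/minikube-darwin-arm64 -p functional-618000 ssh "sudo cat /etc/test/nested/copy/1781/hosts"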

TestFunctional/parallel/CertSync (0.41s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1781.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 ssh "sudo cat /etc/ssl/certs/1781.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1781.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 ssh "sudo cat /usr/share/ca-certificates/1781.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/17812.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 ssh "sudo cat /etc/ssl/certs/17812.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/17812.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 ssh "sudo cat /usr/share/ca-certificates/17812.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.41s)

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-618000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.06s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-618000 ssh "sudo systemctl is-active crio": exit status 1 (62.721459ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.06s)

TestFunctional/parallel/License (1.23s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
functional_test.go:2284: (dbg) Done: out/minikube-darwin-arm64 license: (1.230893s)
--- PASS: TestFunctional/parallel/License (1.23s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.16s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.16s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-618000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-618000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-618000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-618000 image ls --format short --alsologtostderr:
I0307 09:41:11.880327    2766 out.go:291] Setting OutFile to fd 1 ...
I0307 09:41:11.880626    2766 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 09:41:11.880630    2766 out.go:304] Setting ErrFile to fd 2...
I0307 09:41:11.880633    2766 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 09:41:11.880773    2766 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
I0307 09:41:11.881174    2766 config.go:182] Loaded profile config "functional-618000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 09:41:11.881240    2766 config.go:182] Loaded profile config "functional-618000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 09:41:11.882148    2766 ssh_runner.go:195] Run: systemctl --version
I0307 09:41:11.882158    2766 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/functional-618000/id_rsa Username:docker}
I0307 09:41:11.908664    2766 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-618000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-618000 | 37a31f2ebf72a | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | 9961cbceaf234 | 116MB  |
| registry.k8s.io/kube-scheduler              | v1.28.4           | 05c284c929889 | 57.8MB |
| gcr.io/k8s-minikube/busybox                 | latest            | 71a676dd070f4 | 1.41MB |
| docker.io/library/nginx                     | alpine            | be5e6f23a9904 | 43.6MB |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 3ca3ca488cf13 | 68.4MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| docker.io/localhost/my-image                | functional-618000 | 994b759295937 | 1.41MB |
| docker.io/library/nginx                     | latest            | 760b7cbba31e1 | 192MB  |
| gcr.io/google-containers/addon-resizer      | functional-618000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 04b4c447bb9d4 | 120MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 9cdd6470f48c8 | 181MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-618000 image ls --format table --alsologtostderr:
I0307 09:41:18.028475    2779 out.go:291] Setting OutFile to fd 1 ...
I0307 09:41:18.028629    2779 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 09:41:18.028632    2779 out.go:304] Setting ErrFile to fd 2...
I0307 09:41:18.028634    2779 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 09:41:18.028756    2779 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
I0307 09:41:18.029206    2779 config.go:182] Loaded profile config "functional-618000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 09:41:18.029269    2779 config.go:182] Loaded profile config "functional-618000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 09:41:18.030261    2779 ssh_runner.go:195] Run: systemctl --version
I0307 09:41:18.030271    2779 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/functional-618000/id_rsa Username:docker}
I0307 09:41:18.061433    2779 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-618000 image ls --format json --alsologtostderr:
[{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1410000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"37a31f2ebf72aa498c3f2e13c462d108862ee288ff7e645b657d4499967cc200","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-618000"],"size":"30"},{"id":"760b7cbba31e196288effd2af6924c42637
ac5e0d67db4de6309f24518844676","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"be5e6f23a9904ed26efa7a49fb3d5e63d1c488dbb7b5134e869488afd747ec3f","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43600000"},{"id":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"57800000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"994b759295937eecc99b04270cc78db808001d3d6f491056d4f77e09df296b6a","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-618000"],"size":"1410000"},{"id":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"116000000"},{"id":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":[],"repoTag
s":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"68400000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-618000"],"size":"32900000"},{"id":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"120000000"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"]
,"size":"85000000"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"181000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-618000 image ls --format json --alsologtostderr:
I0307 09:41:17.947318    2777 out.go:291] Setting OutFile to fd 1 ...
I0307 09:41:17.947655    2777 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 09:41:17.947659    2777 out.go:304] Setting ErrFile to fd 2...
I0307 09:41:17.947662    2777 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 09:41:17.948479    2777 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
I0307 09:41:17.948982    2777 config.go:182] Loaded profile config "functional-618000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 09:41:17.949046    2777 config.go:182] Loaded profile config "functional-618000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 09:41:17.950089    2777 ssh_runner.go:195] Run: systemctl --version
I0307 09:41:17.950101    2777 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/functional-618000/id_rsa Username:docker}
I0307 09:41:17.982309    2777 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-618000 image ls --format yaml --alsologtostderr:
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-618000
size: "32900000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 37a31f2ebf72aa498c3f2e13c462d108862ee288ff7e645b657d4499967cc200
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-618000
size: "30"
- id: 760b7cbba31e196288effd2af6924c42637ac5e0d67db4de6309f24518844676
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "68400000"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "181000000"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: be5e6f23a9904ed26efa7a49fb3d5e63d1c488dbb7b5134e869488afd747ec3f
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43600000"
- id: 04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "120000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "57800000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "116000000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-618000 image ls --format yaml --alsologtostderr:
I0307 09:41:11.952688    2768 out.go:291] Setting OutFile to fd 1 ...
I0307 09:41:11.952831    2768 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 09:41:11.952834    2768 out.go:304] Setting ErrFile to fd 2...
I0307 09:41:11.952837    2768 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 09:41:11.952958    2768 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
I0307 09:41:11.953391    2768 config.go:182] Loaded profile config "functional-618000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 09:41:11.953445    2768 config.go:182] Loaded profile config "functional-618000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 09:41:11.954395    2768 ssh_runner.go:195] Run: systemctl --version
I0307 09:41:11.954403    2768 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/functional-618000/id_rsa Username:docker}
I0307 09:41:11.980658    2768 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.92s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-618000 ssh pgrep buildkitd: exit status 1 (60.824541ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 image build -t localhost/my-image:functional-618000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-618000 image build -t localhost/my-image:functional-618000 testdata/build --alsologtostderr: (5.780878334s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-618000 image build -t localhost/my-image:functional-618000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Verifying Checksum
a01966dde7f8: Download complete
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in 29396f6e8965
Removing intermediate container 29396f6e8965
---> 8d5ec7f2eb3a
Step 3/3 : ADD content.txt /
---> 994b75929593
Successfully built 994b75929593
Successfully tagged localhost/my-image:functional-618000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-618000 image build -t localhost/my-image:functional-618000 testdata/build --alsologtostderr:
I0307 09:41:12.084831    2772 out.go:291] Setting OutFile to fd 1 ...
I0307 09:41:12.085080    2772 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 09:41:12.085086    2772 out.go:304] Setting ErrFile to fd 2...
I0307 09:41:12.085088    2772 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 09:41:12.085220    2772 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18241-1349/.minikube/bin
I0307 09:41:12.085631    2772 config.go:182] Loaded profile config "functional-618000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 09:41:12.086384    2772 config.go:182] Loaded profile config "functional-618000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 09:41:12.087324    2772 ssh_runner.go:195] Run: systemctl --version
I0307 09:41:12.087332    2772 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18241-1349/.minikube/machines/functional-618000/id_rsa Username:docker}
I0307 09:41:12.112600    2772 build_images.go:151] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3366834907.tar
I0307 09:41:12.112664    2772 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0307 09:41:12.116060    2772 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3366834907.tar
I0307 09:41:12.117880    2772 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3366834907.tar: stat -c "%s %y" /var/lib/minikube/build/build.3366834907.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3366834907.tar': No such file or directory
I0307 09:41:12.117896    2772 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3366834907.tar --> /var/lib/minikube/build/build.3366834907.tar (3072 bytes)
I0307 09:41:12.126906    2772 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3366834907
I0307 09:41:12.132507    2772 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3366834907 -xf /var/lib/minikube/build/build.3366834907.tar
I0307 09:41:12.136060    2772 docker.go:360] Building image: /var/lib/minikube/build/build.3366834907
I0307 09:41:12.136107    2772 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-618000 /var/lib/minikube/build/build.3366834907
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0307 09:41:17.820445    2772 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-618000 /var/lib/minikube/build/build.3366834907: (5.684509541s)
I0307 09:41:17.820797    2772 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3366834907
I0307 09:41:17.824250    2772 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3366834907.tar
I0307 09:41:17.827540    2772 build_images.go:207] Built localhost/my-image:functional-618000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3366834907.tar
I0307 09:41:17.827561    2772 build_images.go:123] succeeded building to: functional-618000
I0307 09:41:17.827567    2772 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 image ls
2024/03/07 09:41:17 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.92s)

TestFunctional/parallel/ImageCommands/Setup (5.26s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (5.217391333s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-618000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (5.26s)

TestFunctional/parallel/DockerEnv/bash (0.43s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-618000 docker-env) && out/minikube-darwin-arm64 status -p functional-618000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-618000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.43s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ServiceCmd/DeployApp (13.09s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-618000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-618000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-v57k9" [7bfb00f1-b699-4703-bf7d-7dd17706ba55] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-v57k9" [7bfb00f1-b699-4703-bf7d-7dd17706ba55] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.004158375s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.09s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.16s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 image load --daemon gcr.io/google-containers/addon-resizer:functional-618000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-618000 image load --daemon gcr.io/google-containers/addon-resizer:functional-618000 --alsologtostderr: (2.082971s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.16s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.53s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 image load --daemon gcr.io/google-containers/addon-resizer:functional-618000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-618000 image load --daemon gcr.io/google-containers/addon-resizer:functional-618000 --alsologtostderr: (1.458324542s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.53s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.31s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (5.292649709s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-618000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 image load --daemon gcr.io/google-containers/addon-resizer:functional-618000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-618000 image load --daemon gcr.io/google-containers/addon-resizer:functional-618000 --alsologtostderr: (1.901286041s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.31s)

TestFunctional/parallel/ServiceCmd/List (0.09s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.09s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 service list -o json
functional_test.go:1490: Took "88.655417ms" to run "out/minikube-darwin-arm64 -p functional-618000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.1s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.105.4:32104
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.105.4:32104
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.73s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-618000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-618000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-618000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2571: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-618000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.73s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.5s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 image save gcr.io/google-containers/addon-resizer:functional-618000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.50s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-618000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.25s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-618000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [442d620e-af0b-4a6d-840d-d9cc770a9ce2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [442d620e-af0b-4a6d-840d-d9cc770a9ce2] Running
E0307 09:40:35.806845    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/client.crt: no such file or directory
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003830875s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.25s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 image rm gcr.io/google-containers/addon-resizer:functional-618000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-618000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 image save --daemon gcr.io/google-containers/addon-resizer:functional-618000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-618000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-618000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.163.233 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-618000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.18s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.18s)

TestFunctional/parallel/ProfileCmd/profile_list (0.15s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "112.359708ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "36.913667ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "115.08675ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "37.561417ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)

TestFunctional/parallel/MountCmd/any-port (9.35s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-618000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port173083272/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1709833258143293000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port173083272/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1709833258143293000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port173083272/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1709833258143293000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port173083272/001/test-1709833258143293000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-618000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (60.787333ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar  7 17:40 created-by-test
-rw-r--r-- 1 docker docker 24 Mar  7 17:40 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar  7 17:40 test-1709833258143293000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 ssh cat /mount-9p/test-1709833258143293000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-618000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [2bc2637c-3594-4bde-bf46-63e52d60a520] Pending
helpers_test.go:344: "busybox-mount" [2bc2637c-3594-4bde-bf46-63e52d60a520] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [2bc2637c-3594-4bde-bf46-63e52d60a520] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [2bc2637c-3594-4bde-bf46-63e52d60a520] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.004066833s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-618000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-618000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port173083272/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.35s)

TestFunctional/parallel/MountCmd/specific-port (1.05s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-618000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1771804843/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-618000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (65.238959ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-618000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1771804843/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-618000 ssh "sudo umount -f /mount-9p": exit status 1 (62.871209ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-618000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-618000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1771804843/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.05s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.5s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-618000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup872211090/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-618000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup872211090/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-618000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup872211090/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-618000 ssh "findmnt -T" /mount1: exit status 1 (71.002792ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-618000 ssh "findmnt -T" /mount2: exit status 1 (60.252375ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-618000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-618000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-618000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup872211090/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-618000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup872211090/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-618000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup872211090/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.50s)

TestFunctional/delete_addon-resizer_images (0.15s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-618000
--- PASS: TestFunctional/delete_addon-resizer_images (0.15s)

TestFunctional/delete_my-image_image (0.04s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-618000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-618000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestMutliControlPlane/serial/StartCluster (302.95s)
=== RUN   TestMutliControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-951000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0307 09:41:57.726334    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/client.crt: no such file or directory
E0307 09:44:13.854517    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/client.crt: no such file or directory
E0307 09:44:41.561848    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/addons-040000/client.crt: no such file or directory
E0307 09:45:16.528042    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/functional-618000/client.crt: no such file or directory
E0307 09:45:16.534349    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/functional-618000/client.crt: no such file or directory
E0307 09:45:16.546419    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/functional-618000/client.crt: no such file or directory
E0307 09:45:16.568488    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/functional-618000/client.crt: no such file or directory
E0307 09:45:16.609479    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/functional-618000/client.crt: no such file or directory
E0307 09:45:16.691548    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/functional-618000/client.crt: no such file or directory
E0307 09:45:16.853685    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/functional-618000/client.crt: no such file or directory
E0307 09:45:17.175760    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/functional-618000/client.crt: no such file or directory
E0307 09:45:17.816171    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/functional-618000/client.crt: no such file or directory
E0307 09:45:19.098260    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/functional-618000/client.crt: no such file or directory
E0307 09:45:21.660283    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/functional-618000/client.crt: no such file or directory
E0307 09:45:26.782281    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/functional-618000/client.crt: no such file or directory
E0307 09:45:37.024096    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/functional-618000/client.crt: no such file or directory
E0307 09:45:57.505539    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/functional-618000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-951000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (5m2.765810084s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/StartCluster (302.95s)

TestMutliControlPlane/serial/DeployApp (9.22s)
=== RUN   TestMutliControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-951000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-951000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-951000 -- rollout status deployment/busybox: (7.412472166s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-951000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-951000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-951000 -- exec busybox-5b5d89c9d6-mvpvt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-951000 -- exec busybox-5b5d89c9d6-psvlm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-951000 -- exec busybox-5b5d89c9d6-vtfrb -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-951000 -- exec busybox-5b5d89c9d6-mvpvt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-951000 -- exec busybox-5b5d89c9d6-psvlm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-951000 -- exec busybox-5b5d89c9d6-vtfrb -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-951000 -- exec busybox-5b5d89c9d6-mvpvt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-951000 -- exec busybox-5b5d89c9d6-psvlm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-951000 -- exec busybox-5b5d89c9d6-vtfrb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMutliControlPlane/serial/DeployApp (9.22s)

TestMutliControlPlane/serial/PingHostFromPods (0.79s)
=== RUN   TestMutliControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-951000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-951000 -- exec busybox-5b5d89c9d6-mvpvt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-951000 -- exec busybox-5b5d89c9d6-mvpvt -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-951000 -- exec busybox-5b5d89c9d6-psvlm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-951000 -- exec busybox-5b5d89c9d6-psvlm -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-951000 -- exec busybox-5b5d89c9d6-vtfrb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-951000 -- exec busybox-5b5d89c9d6-vtfrb -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMutliControlPlane/serial/PingHostFromPods (0.79s)

TestMutliControlPlane/serial/AddWorkerNode (51.25s)
=== RUN   TestMutliControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-951000 -v=7 --alsologtostderr
E0307 09:46:38.466372    1781 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18241-1349/.minikube/profiles/functional-618000/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-951000 -v=7 --alsologtostderr: (51.012446959s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/AddWorkerNode (51.25s)

TestMutliControlPlane/serial/NodeLabels (0.15s)
=== RUN   TestMutliControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-951000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMutliControlPlane/serial/NodeLabels (0.15s)

TestMutliControlPlane/serial/HAppyAfterClusterStart (0.26s)
=== RUN   TestMutliControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterClusterStart (0.26s)

TestMutliControlPlane/serial/CopyFile (4.39s)
=== RUN   TestMutliControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 cp testdata/cp-test.txt ha-951000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 ssh -n ha-951000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 cp ha-951000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMutliControlPlaneserialCopyFile4086841609/001/cp-test_ha-951000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 ssh -n ha-951000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 cp ha-951000:/home/docker/cp-test.txt ha-951000-m02:/home/docker/cp-test_ha-951000_ha-951000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 ssh -n ha-951000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 ssh -n ha-951000-m02 "sudo cat /home/docker/cp-test_ha-951000_ha-951000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 cp ha-951000:/home/docker/cp-test.txt ha-951000-m03:/home/docker/cp-test_ha-951000_ha-951000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 ssh -n ha-951000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 ssh -n ha-951000-m03 "sudo cat /home/docker/cp-test_ha-951000_ha-951000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 cp ha-951000:/home/docker/cp-test.txt ha-951000-m04:/home/docker/cp-test_ha-951000_ha-951000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 ssh -n ha-951000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 ssh -n ha-951000-m04 "sudo cat /home/docker/cp-test_ha-951000_ha-951000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 cp testdata/cp-test.txt ha-951000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 ssh -n ha-951000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 cp ha-951000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMutliControlPlaneserialCopyFile4086841609/001/cp-test_ha-951000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 ssh -n ha-951000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 cp ha-951000-m02:/home/docker/cp-test.txt ha-951000:/home/docker/cp-test_ha-951000-m02_ha-951000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 ssh -n ha-951000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 ssh -n ha-951000 "sudo cat /home/docker/cp-test_ha-951000-m02_ha-951000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 cp ha-951000-m02:/home/docker/cp-test.txt ha-951000-m03:/home/docker/cp-test_ha-951000-m02_ha-951000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 ssh -n ha-951000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 ssh -n ha-951000-m03 "sudo cat /home/docker/cp-test_ha-951000-m02_ha-951000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 cp ha-951000-m02:/home/docker/cp-test.txt ha-951000-m04:/home/docker/cp-test_ha-951000-m02_ha-951000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 ssh -n ha-951000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 ssh -n ha-951000-m04 "sudo cat /home/docker/cp-test_ha-951000-m02_ha-951000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 cp testdata/cp-test.txt ha-951000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 ssh -n ha-951000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 cp ha-951000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMutliControlPlaneserialCopyFile4086841609/001/cp-test_ha-951000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 ssh -n ha-951000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 cp ha-951000-m03:/home/docker/cp-test.txt ha-951000:/home/docker/cp-test_ha-951000-m03_ha-951000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 ssh -n ha-951000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 ssh -n ha-951000 "sudo cat /home/docker/cp-test_ha-951000-m03_ha-951000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 cp ha-951000-m03:/home/docker/cp-test.txt ha-951000-m02:/home/docker/cp-test_ha-951000-m03_ha-951000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 ssh -n ha-951000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 ssh -n ha-951000-m02 "sudo cat /home/docker/cp-test_ha-951000-m03_ha-951000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 cp ha-951000-m03:/home/docker/cp-test.txt ha-951000-m04:/home/docker/cp-test_ha-951000-m03_ha-951000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 ssh -n ha-951000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 ssh -n ha-951000-m04 "sudo cat /home/docker/cp-test_ha-951000-m03_ha-951000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 cp testdata/cp-test.txt ha-951000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 ssh -n ha-951000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 cp ha-951000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMutliControlPlaneserialCopyFile4086841609/001/cp-test_ha-951000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 ssh -n ha-951000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 cp ha-951000-m04:/home/docker/cp-test.txt ha-951000:/home/docker/cp-test_ha-951000-m04_ha-951000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 ssh -n ha-951000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 ssh -n ha-951000 "sudo cat /home/docker/cp-test_ha-951000-m04_ha-951000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 cp ha-951000-m04:/home/docker/cp-test.txt ha-951000-m02:/home/docker/cp-test_ha-951000-m04_ha-951000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 ssh -n ha-951000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 ssh -n ha-951000-m02 "sudo cat /home/docker/cp-test_ha-951000-m04_ha-951000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 cp ha-951000-m04:/home/docker/cp-test.txt ha-951000-m03:/home/docker/cp-test_ha-951000-m04_ha-951000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 ssh -n ha-951000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-951000 ssh -n ha-951000-m03 "sudo cat /home/docker/cp-test_ha-951000-m04_ha-951000-m03.txt"
--- PASS: TestMutliControlPlane/serial/CopyFile (4.39s)
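Note: the sequence above exercises all three directions of minikube cp across the four nodes, verifying each copy with ssh + sudo cat. The three forms, with names from this run (the local destination path is illustrative):

    # local file -> node
    out/minikube-darwin-arm64 -p ha-951000 cp testdata/cp-test.txt ha-951000:/home/docker/cp-test.txt
    # node -> local file
    out/minikube-darwin-arm64 -p ha-951000 cp ha-951000:/home/docker/cp-test.txt ./cp-test_ha-951000.txt
    # node -> node
    out/minikube-darwin-arm64 -p ha-951000 cp ha-951000:/home/docker/cp-test.txt ha-951000-m02:/home/docker/cp-test_ha-951000_ha-951000-m02.txt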

TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (79.98s)
=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m19.979404875s)
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (79.98s)

TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.07s)
=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.07s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (1.75s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-564000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-564000 --output=json --user=testUser: (1.748036458s)
--- PASS: TestJSONOutput/stop/Command (1.75s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.33s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-139000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-139000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (93.943417ms)

-- stdout --
	{"specversion":"1.0","id":"94c07152-25e2-4b93-adbb-2a4ceb435fca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-139000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e6d2b3a7-1414-41be-af65-81ef900f8dd5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18241"}}
	{"specversion":"1.0","id":"166f531e-055b-4e2e-b0b9-560ad4346400","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig"}}
	{"specversion":"1.0","id":"43be918a-4688-4106-944c-7df6e24fc03a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"aaa5562b-f046-46e8-a7d9-f3fa841c434b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e92aeae9-6679-4ef6-82e2-630331bdf5e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube"}}
	{"specversion":"1.0","id":"5f4e5fab-efe2-4c9e-815d-2b0fea0b03a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3440af58-5403-4d02-ae62-2495c6c27a1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-139000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-139000
--- PASS: TestErrorJSONOutput (0.33s)
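Note: with --output=json, each stdout line is a single CloudEvents-style JSON object, so the error event the test asserts on can be picked out with ordinary line-oriented JSON tooling. A sketch using jq (not part of the test itself):

    out/minikube-darwin-arm64 start -p json-output-error-139000 --output=json --driver=fail \
        | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
    # DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on darwin/arm64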

TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (4.94s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (4.94s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-123000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-123000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (100.649375ms)

-- stdout --
	* [NoKubernetes-123000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18241-1349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18241-1349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
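Note: the MK_USAGE failure is the behavior under test: --no-kubernetes and --kubernetes-version are mutually exclusive. Either works on its own; both commands below are taken from this run's output:

    # start the profile without Kubernetes components
    out/minikube-darwin-arm64 start -p NoKubernetes-123000 --no-kubernetes --driver=qemu2
    # or clear a globally configured version, as the error message suggests
    minikube config unset kubernetes-version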

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-123000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-123000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (41.445417ms)

-- stdout --
	* The control-plane node NoKubernetes-123000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-123000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.34s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.704732459s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.634300042s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.34s)

TestNoKubernetes/serial/Stop (3.31s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-123000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-123000: (3.31297125s)
--- PASS: TestNoKubernetes/serial/Stop (3.31s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-123000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-123000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (44.752209ms)

-- stdout --
	* The control-plane node NoKubernetes-123000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-123000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.8s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-853000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.80s)

TestStartStop/group/old-k8s-version/serial/Stop (3.51s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-658000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-658000 --alsologtostderr -v=3: (3.510193875s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.51s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-658000 -n old-k8s-version-658000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-658000 -n old-k8s-version-658000: exit status 7 (55.736083ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-658000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
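Note: minikube status exits non-zero when the host is not running (exit status 7 with Host=Stopped in this run), which the test explicitly tolerates ("may be ok") before confirming that an addon can still be enabled on a stopped profile:

    out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-658000 || true
    out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-658000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4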

TestStartStop/group/no-preload/serial/Stop (3.6s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-594000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-594000 --alsologtostderr -v=3: (3.6006095s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.60s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-594000 -n no-preload-594000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-594000 -n no-preload-594000: exit status 7 (61.196167ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-594000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (3.5s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-138000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-138000 --alsologtostderr -v=3: (3.496925875s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.50s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.14s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-138000 -n embed-certs-138000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-138000 -n embed-certs-138000: exit status 7 (70.960292ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-138000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.14s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.7s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-056000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-056000 --alsologtostderr -v=3: (3.704432417s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.70s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-056000 -n default-k8s-diff-port-056000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-056000 -n default-k8s-diff-port-056000: exit status 7 (62.749708ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-056000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-706000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.01s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-706000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-706000 --alsologtostderr -v=3: (3.010650667s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.01s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-706000 -n newest-cni-706000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-706000 -n newest-cni-706000: exit status 7 (63.604417ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-706000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (23/281)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.5s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-819000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-819000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-819000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-819000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-819000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-819000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-819000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-819000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-819000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-819000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-819000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-819000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-819000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-819000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-819000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-819000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-819000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-819000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-819000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-819000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-819000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-819000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-819000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-819000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-819000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-819000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-819000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-819000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-819000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-819000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-819000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-819000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-819000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-819000"

>>> host: ip r s:
* Profile "cilium-819000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-819000"

>>> host: iptables-save:
* Profile "cilium-819000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-819000"

>>> host: iptables table nat:
* Profile "cilium-819000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-819000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-819000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-819000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-819000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-819000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-819000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-819000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-819000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-819000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-819000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-819000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-819000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-819000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-819000"

>>> host: kubelet daemon config:
* Profile "cilium-819000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-819000"

>>> k8s: kubelet logs:
* Profile "cilium-819000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-819000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-819000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-819000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-819000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-819000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-819000

>>> host: docker daemon status:
* Profile "cilium-819000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-819000"

>>> host: docker daemon config:
* Profile "cilium-819000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-819000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-819000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-819000"

>>> host: docker system info:
* Profile "cilium-819000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-819000"

>>> host: cri-docker daemon status:
* Profile "cilium-819000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-819000"

>>> host: cri-docker daemon config:
* Profile "cilium-819000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-819000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-819000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-819000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-819000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-819000"

>>> host: cri-dockerd version:
* Profile "cilium-819000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-819000"

>>> host: containerd daemon status:
* Profile "cilium-819000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-819000"

>>> host: containerd daemon config:
* Profile "cilium-819000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-819000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-819000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-819000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-819000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-819000"

>>> host: containerd config dump:
* Profile "cilium-819000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-819000"

>>> host: crio daemon status:
* Profile "cilium-819000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-819000"

>>> host: crio daemon config:
* Profile "cilium-819000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-819000"

>>> host: /etc/crio:
* Profile "cilium-819000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-819000"

>>> host: crio config:
* Profile "cilium-819000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-819000"

----------------------- debugLogs end: cilium-819000 [took: 2.264825959s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-819000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-819000
--- SKIP: TestNetworkPlugins/group/cilium (2.50s)

TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-432000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-432000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)
